Test Report: Hyperkit_macOS 19662

3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258

Tests failed (20/214)
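To reproduce a single failure from this report locally, the usual route is minikube's integration-test harness rather than re-running the raw command below by hand. A minimal sketch, assuming a checked-out minikube tree on the same commit with the hyperkit driver installed; the TEST_ARGS wiring follows the upstream contributor docs, so treat the exact values as illustrative:

    # Build minikube, then run only TestOffline against the hyperkit driver.
    # TEST_ARGS is forwarded to the integration-test binary by the Makefile's
    # `integration` target.
    make
    env TEST_ARGS="-minikube-start-args=--driver=hyperkit -test.run TestOffline" make integration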

TestOffline (195.65s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-248000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-248000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m10.221658958s)

-- stdout --
	* [offline-docker-248000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-248000" primary control-plane node in "offline-docker-248000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-248000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0917 10:51:33.353767    6033 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:51:33.353974    6033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:51:33.353979    6033 out.go:358] Setting ErrFile to fd 2...
	I0917 10:51:33.353983    6033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:51:33.354205    6033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:51:33.356459    6033 out.go:352] Setting JSON to false
	I0917 10:51:33.382941    6033 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4860,"bootTime":1726590633,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:51:33.383124    6033 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:51:33.446427    6033 out.go:177] * [offline-docker-248000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:51:33.490258    6033 notify.go:220] Checking for updates...
	I0917 10:51:33.514212    6033 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:51:33.578972    6033 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:51:33.599756    6033 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:51:33.621981    6033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:51:33.642938    6033 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:51:33.663741    6033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:51:33.685182    6033 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:51:33.713857    6033 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 10:51:33.756099    6033 start.go:297] selected driver: hyperkit
	I0917 10:51:33.756132    6033 start.go:901] validating driver "hyperkit" against <nil>
	I0917 10:51:33.756153    6033 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:51:33.760801    6033 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:51:33.760921    6033 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:51:33.769267    6033 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:51:33.773073    6033 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:51:33.773110    6033 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:51:33.773144    6033 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:51:33.773383    6033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:51:33.773420    6033 cni.go:84] Creating CNI manager for ""
	I0917 10:51:33.773463    6033 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:51:33.773470    6033 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:51:33.773542    6033 start.go:340] cluster config:
	{Name:offline-docker-248000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:51:33.773622    6033 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:51:33.820151    6033 out.go:177] * Starting "offline-docker-248000" primary control-plane node in "offline-docker-248000" cluster
	I0917 10:51:33.862077    6033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:51:33.862162    6033 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:51:33.862190    6033 cache.go:56] Caching tarball of preloaded images
	I0917 10:51:33.862409    6033 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:51:33.862427    6033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:51:33.862908    6033 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/offline-docker-248000/config.json ...
	I0917 10:51:33.862948    6033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/offline-docker-248000/config.json: {Name:mkb41c48c5096b22a37ba79b64084e45d524f145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:51:33.883835    6033 start.go:360] acquireMachinesLock for offline-docker-248000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:51:33.884006    6033 start.go:364] duration metric: took 124.708µs to acquireMachinesLock for "offline-docker-248000"
	I0917 10:51:33.884063    6033 start.go:93] Provisioning new machine with config: &{Name:offline-docker-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:51:33.884170    6033 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:51:33.905938    6033 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:51:33.906096    6033 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:51:33.906133    6033 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:51:33.914934    6033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53709
	I0917 10:51:33.915320    6033 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:51:33.915726    6033 main.go:141] libmachine: Using API Version  1
	I0917 10:51:33.915739    6033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:51:33.915997    6033 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:51:33.916122    6033 main.go:141] libmachine: (offline-docker-248000) Calling .GetMachineName
	I0917 10:51:33.916217    6033 main.go:141] libmachine: (offline-docker-248000) Calling .DriverName
	I0917 10:51:33.916355    6033 start.go:159] libmachine.API.Create for "offline-docker-248000" (driver="hyperkit")
	I0917 10:51:33.916381    6033 client.go:168] LocalClient.Create starting
	I0917 10:51:33.916418    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:51:33.916480    6033 main.go:141] libmachine: Decoding PEM data...
	I0917 10:51:33.916501    6033 main.go:141] libmachine: Parsing certificate...
	I0917 10:51:33.916623    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:51:33.916686    6033 main.go:141] libmachine: Decoding PEM data...
	I0917 10:51:33.916705    6033 main.go:141] libmachine: Parsing certificate...
	I0917 10:51:33.916725    6033 main.go:141] libmachine: Running pre-create checks...
	I0917 10:51:33.916737    6033 main.go:141] libmachine: (offline-docker-248000) Calling .PreCreateCheck
	I0917 10:51:33.916884    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:33.917023    6033 main.go:141] libmachine: (offline-docker-248000) Calling .GetConfigRaw
	I0917 10:51:33.926995    6033 main.go:141] libmachine: Creating machine...
	I0917 10:51:33.927008    6033 main.go:141] libmachine: (offline-docker-248000) Calling .Create
	I0917 10:51:33.927136    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:33.927275    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:51:33.927135    6054 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:51:33.927359    6033 main.go:141] libmachine: (offline-docker-248000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:51:34.371662    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:51:34.371582    6054 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/id_rsa...
	I0917 10:51:34.447998    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:51:34.447923    6054 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/offline-docker-248000.rawdisk...
	I0917 10:51:34.448013    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Writing magic tar header
	I0917 10:51:34.448024    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Writing SSH key tar header
	I0917 10:51:34.490111    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:51:34.490053    6054 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000 ...
	I0917 10:51:34.877427    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:34.877447    6033 main.go:141] libmachine: (offline-docker-248000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/hyperkit.pid
	I0917 10:51:34.877482    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Using UUID fe1689cd-50e0-4d25-8bf8-7f09b99f180b
	I0917 10:51:35.189649    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Generated MAC b2:53:9b:71:f9:47
	I0917 10:51:35.189669    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-248000
	I0917 10:51:35.189705    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe1689cd-50e0-4d25-8bf8-7f09b99f180b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002041b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:51:35.189739    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe1689cd-50e0-4d25-8bf8-7f09b99f180b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002041b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:51:35.189825    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fe1689cd-50e0-4d25-8bf8-7f09b99f180b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/offline-docker-248000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-248000"}
	I0917 10:51:35.189892    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fe1689cd-50e0-4d25-8bf8-7f09b99f180b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/offline-docker-248000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-248000"
	I0917 10:51:35.189910    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:51:35.192837    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 DEBUG: hyperkit: Pid is 6078
	I0917 10:51:35.195915    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 0
	I0917 10:51:35.195929    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:35.195984    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:35.197344    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:35.197411    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:35.197427    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:35.197441    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:35.197447    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:35.197463    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:35.197492    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:35.197502    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:35.197509    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:35.197516    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:35.197523    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:35.197532    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:35.197542    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:35.197560    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:35.197574    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:35.197585    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:35.197598    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:35.197613    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:35.197623    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:35.201890    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:51:35.255000    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:51:35.273648    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:51:35.273688    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:51:35.273698    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:51:35.273705    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:51:35.651036    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:51:35.651063    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:51:35.766268    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:51:35.766302    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:51:35.766314    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:51:35.766324    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:51:35.767056    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:51:35.767073    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:51:37.199294    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 1
	I0917 10:51:37.199308    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:37.199344    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:37.200305    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:37.200376    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:37.200398    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:37.200405    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:37.200416    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:37.200425    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:37.200431    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:37.200439    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:37.200447    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:37.200476    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:37.200486    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:37.200493    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:37.200503    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:37.200510    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:37.200518    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:37.200527    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:37.200537    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:37.200545    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:37.200555    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:39.201055    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 2
	I0917 10:51:39.201072    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:39.201127    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:39.201999    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:39.202066    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:39.202076    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:39.202094    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:39.202100    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:39.202106    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:39.202112    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:39.202124    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:39.202134    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:39.202140    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:39.202147    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:39.202160    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:39.202172    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:39.202184    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:39.202192    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:39.202199    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:39.202207    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:39.202230    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:39.202248    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:41.163266    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:41 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:51:41.163455    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:41 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:51:41.163463    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:41 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:51:41.183072    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:51:41 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:51:41.203895    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 3
	I0917 10:51:41.203924    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:41.204072    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:41.205714    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:41.205783    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:41.205805    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:41.205846    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:41.205868    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:41.205920    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:41.205938    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:41.205949    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:41.205960    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:41.205968    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:41.205979    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:41.205988    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:41.205998    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:41.206028    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:41.206053    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:41.206074    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:41.206086    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:41.206107    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:41.206124    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:43.206192    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 4
	I0917 10:51:43.206238    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:43.206312    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:43.207217    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:43.207304    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:43.207318    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:43.207333    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:43.207344    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:43.207352    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:43.207362    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:43.207370    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:43.207379    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:43.207397    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:43.207409    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:43.207416    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:43.207424    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:43.207431    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:43.207438    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:43.207446    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:43.207454    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:43.207461    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:43.207468    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:45.209517    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 5
	I0917 10:51:45.209534    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:45.209567    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:45.210433    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:45.210487    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:45.210499    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:45.210508    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:45.210520    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:45.210527    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:45.210560    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:45.210581    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:45.210595    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:45.210602    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:45.210610    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:45.210619    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:45.210627    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:45.210641    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:45.210655    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:45.210664    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:45.210675    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:45.210685    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:45.210697    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:47.210978    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 6
	I0917 10:51:47.210990    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:47.211056    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:47.211927    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:47.211983    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:47.211995    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:47.212004    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:47.212013    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:47.212032    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:47.212038    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:47.212044    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:47.212052    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:47.212059    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:47.212064    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:47.212077    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:47.212090    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:47.212098    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:47.212106    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:47.212120    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:47.212130    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:47.212137    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:47.212144    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:49.214188    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 7
	I0917 10:51:49.214215    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:49.214243    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:49.215105    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:49.215172    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:49.215182    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:49.215205    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:49.215215    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:49.215256    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:49.215269    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:49.215279    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:49.215285    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:49.215294    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:49.215319    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:49.215333    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:49.215346    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:49.215357    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:49.215364    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:49.215371    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:49.215379    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:49.215385    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:49.215393    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
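	
	Each "dhcp entry" line above is one lease block from macOS's /var/db/dhcpd_leases, which the hyperkit driver re-parses on every attempt while it waits for the VM's generated MAC (b2:53:9b:71:f9:47) to appear. A minimal Go sketch of that matching step, assuming the brace-delimited key=value layout macOS writes to the lease file; the DHCPEntry struct, field names, and regular expression here are illustrative, not minikube's actual types:
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
		"strings"
	)
	
	// DHCPEntry mirrors the fields printed in the log lines above.
	type DHCPEntry struct {
		Name, IPAddress, HWAddress, ID, Lease string
	}
	
	// findLeaseByMAC scans a dhcpd_leases-style file for a lease whose
	// hw_address matches mac. Blocks are assumed to look like:
	//   {
	//     name=minikube
	//     ip_address=192.169.0.18
	//     hw_address=1,b6:5:7b:7:a4:ad
	//     ...
	//   }
	func findLeaseByMAC(path, mac string) (*DHCPEntry, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		kv := regexp.MustCompile(`^\s*(\w+)=(.*)$`)
		var cur DHCPEntry
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := scanner.Text()
			switch {
			case strings.Contains(line, "{"):
				cur = DHCPEntry{} // start of a new lease block
			case strings.Contains(line, "}"):
				// End of block: compare MACs, ignoring the "1," ID prefix.
				if strings.TrimPrefix(cur.HWAddress, "1,") == mac {
					return &cur, nil
				}
			default:
				if m := kv.FindStringSubmatch(line); m != nil {
					switch m[1] {
					case "name":
						cur.Name = m[2]
					case "ip_address":
						cur.IPAddress = m[2]
					case "hw_address":
						cur.HWAddress = m[2]
					case "identifier":
						cur.ID = m[2]
					case "lease":
						cur.Lease = m[2]
					}
				}
			}
		}
		return nil, scanner.Err() // nil, nil: no matching lease yet
	}
	
	func main() {
		entry, err := findLeaseByMAC("/var/db/dhcpd_leases", "b2:53:9b:71:f9:47")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if entry == nil {
			fmt.Println("no lease yet; caller retries on the next attempt")
			return
		}
		fmt.Printf("found %s at %s\n", entry.Name, entry.IPAddress)
	}
	
	The searched MAC never matches any of the 17 entries, all of which belong to earlier minikube VMs, so every attempt falls through to the next poll.
	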
	I0917 10:51:51.217387    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 8
	I0917 10:51:51.217400    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:51.217444    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:51.218309    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:51.218366    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:51.218376    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:51.218383    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:51.218397    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:51.218413    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:51.218425    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:51.218441    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:51.218462    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:51.218470    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:51.218478    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:51.218492    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:51.218506    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:51.218529    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:51.218542    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:51.218550    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:51.218558    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:51.218565    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:51.218572    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:53.218875    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 9
	I0917 10:51:53.218891    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:53.218986    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:53.219861    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:53.219953    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:53.219964    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:53.219973    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:53.219981    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:53.219991    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:53.219998    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:53.220005    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:53.220011    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:53.220026    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:53.220034    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:53.220043    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:53.220053    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:53.220060    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:53.220068    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:53.220075    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:53.220082    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:53.220096    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:53.220107    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:55.222114    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 10
	I0917 10:51:55.222130    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:55.222140    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:55.223033    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:55.223070    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:55.223078    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:55.223086    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:55.223091    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:55.223105    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:55.223115    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:55.223124    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:55.223129    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:55.223135    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:55.223143    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:55.223150    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:55.223158    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:55.223165    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:55.223172    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:55.223183    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:55.223199    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:55.223216    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:55.223226    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:57.224191    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 11
	I0917 10:51:57.224214    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:57.224316    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:57.225206    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:57.225246    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:57.225259    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:57.225269    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:57.225285    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:57.225292    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:57.225299    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:57.225305    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:57.225312    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:57.225317    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:57.225332    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:57.225341    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:57.225352    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:57.225359    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:57.225373    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:57.225387    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:57.225401    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:57.225410    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:57.225423    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:51:59.227439    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 12
	I0917 10:51:59.227452    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:51:59.227512    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:51:59.228398    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:51:59.228458    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:51:59.228469    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:51:59.228477    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:51:59.228482    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:51:59.228488    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:51:59.228496    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:51:59.228502    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:51:59.228533    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:51:59.228543    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:51:59.228551    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:51:59.228578    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:51:59.228592    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:51:59.228609    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:51:59.228622    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:51:59.228638    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:51:59.228647    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:51:59.228654    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:51:59.228662    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:01.229577    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 13
	I0917 10:52:01.229593    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:01.229630    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:01.230514    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:01.230558    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:01.230567    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:01.230581    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:01.230595    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:01.230602    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:01.230611    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:01.230647    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:01.230664    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:01.230676    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:01.230702    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:01.230714    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:01.230722    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:01.230728    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:01.230746    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:01.230759    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:01.230767    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:01.230775    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:01.230783    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
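	
	Before each lease scan, the driver also re-reads its saved JSON state for the hyperkit pid (6078 throughout this run) and confirms the process is still alive; a dead hyperkit would make further polling pointless. A hedged sketch of that liveness check, assuming a config file carrying the pid under a "Pid" key (the path and field name are made up for illustration; signal 0 is the standard process-existence probe):
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
		"syscall"
	)
	
	// driverConfig holds only the field this sketch needs; the real
	// driver state file contains much more.
	type driverConfig struct {
		Pid int `json:"Pid"`
	}
	
	func hyperkitAlive(configPath string) (bool, int, error) {
		raw, err := os.ReadFile(configPath)
		if err != nil {
			return false, 0, err
		}
		var cfg driverConfig
		if err := json.Unmarshal(raw, &cfg); err != nil {
			return false, 0, err
		}
		// Signal 0 checks existence/permissions without delivering
		// an actual signal to the process.
		err = syscall.Kill(cfg.Pid, 0)
		return err == nil, cfg.Pid, nil
	}
	
	func main() {
		alive, pid, err := hyperkitAlive("hyperkit.json")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("hyperkit pid %d alive: %v\n", pid, alive)
	}
	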
	I0917 10:52:03.232139    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 14
	I0917 10:52:03.232152    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:03.232214    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:03.233094    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:03.233137    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:03.233146    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:03.233156    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:03.233164    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:03.233181    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:03.233189    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:03.233195    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:03.233201    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:03.233207    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:03.233213    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:03.233229    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:03.233238    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:03.233246    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:03.233255    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:03.233271    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:03.233282    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:03.233289    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:03.233297    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:05.235047    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 15
	I0917 10:52:05.235063    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:05.235121    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:05.236036    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:05.236096    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:05.236106    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:05.236116    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:05.236125    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:05.236132    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:05.236138    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:05.236152    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:05.236161    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:05.236168    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:05.236177    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:05.236192    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:05.236216    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:05.236224    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:05.236231    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:05.236239    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:05.236252    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:05.236266    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:05.236275    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:07.237022    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 16
	I0917 10:52:07.237037    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:07.237138    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:07.238020    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:07.238069    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:07.238092    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:07.238104    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:07.238109    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:07.238116    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:07.238126    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:07.238134    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:07.238140    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:07.238146    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:07.238152    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:07.238159    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:07.238166    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:07.238173    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:07.238182    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:07.238190    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:07.238196    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:07.238210    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:07.238230    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:09.240277    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 17
	I0917 10:52:09.240290    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:09.240344    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:09.241235    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:09.241275    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:09.241283    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:09.241310    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:09.241322    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:09.241329    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:09.241337    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:09.241349    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:09.241357    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:09.241373    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:09.241390    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:09.241399    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:09.241407    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:09.241436    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:09.241448    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:09.241456    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:09.241470    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:09.241477    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:09.241498    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:11.241531    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 18
	I0917 10:52:11.241544    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:11.241595    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:11.242534    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:11.242584    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:11.242596    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:11.242605    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:11.242614    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:11.242623    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:11.242631    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:11.242645    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:11.242655    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:11.242662    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:11.242669    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:11.242676    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:11.242692    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:11.242705    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:11.242712    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:11.242718    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:11.242725    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:11.242733    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:11.242741    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:13.244071    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 19
	I0917 10:52:13.244082    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:13.244146    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:13.245264    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:13.245295    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:13.245302    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:13.245313    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:13.245320    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:13.245343    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:13.245366    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:13.245383    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:13.245392    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:13.245400    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:13.245408    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:13.245415    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:13.245422    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:13.245429    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:13.245434    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:13.245441    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:13.245448    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:13.245454    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:13.245460    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:15.247509    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 20
	I0917 10:52:15.247523    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:15.247596    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:15.248497    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:15.248553    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:15.248565    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:15.248577    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:15.248584    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:15.248595    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:15.248601    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:15.248626    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:15.248635    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:15.248641    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:15.248650    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:15.248656    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:15.248678    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:15.248688    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:15.248704    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:15.248712    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:15.248719    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:15.248726    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:15.248739    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:17.250767    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 21
	I0917 10:52:17.250784    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:17.250840    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:17.251743    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:17.251785    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:17.251805    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:17.251815    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:17.251821    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:17.251828    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:17.251834    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:17.251840    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:17.251850    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:17.251856    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:17.251863    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:17.251868    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:17.251880    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:17.251889    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:17.251895    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:17.251903    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:17.251927    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:17.251939    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:17.251957    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:19.254007    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 22
	I0917 10:52:19.254022    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:19.254060    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:19.255096    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:19.255130    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:19.255140    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:19.255149    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:19.255156    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:19.255169    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:19.255183    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:19.255190    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:19.255207    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:19.255215    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:19.255221    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:19.255230    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:19.255238    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:19.255248    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:19.255262    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:19.255274    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:19.255282    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:19.255291    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:19.255304    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:21.255933    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 23
	I0917 10:52:21.255949    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:21.255959    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:21.256836    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:21.256881    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:21.256889    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:21.256900    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:21.256906    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:21.256912    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:21.256918    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:21.256924    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:21.256956    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:21.256965    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:21.256982    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:21.256989    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:21.256996    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:21.257004    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:21.257010    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:21.257016    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:21.257032    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:21.257046    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:21.257062    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:23.259115    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 24
	I0917 10:52:23.259129    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:23.259195    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:23.260118    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:23.260127    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:23.260136    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:23.260144    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:23.260151    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:23.260157    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:23.260164    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:23.260177    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:23.260187    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:23.260195    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:23.260201    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:23.260208    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:23.260225    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:23.260236    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:23.260253    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:23.260266    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:23.260284    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:23.260292    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:23.260300    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:25.261396    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 25
	I0917 10:52:25.261408    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:25.261439    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:25.262296    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:25.262326    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:25.262333    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:25.262340    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:25.262347    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:25.262355    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:25.262361    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:25.262368    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:25.262374    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:25.262392    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:25.262401    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:25.262408    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:25.262416    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:25.262431    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:25.262442    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:25.262458    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:25.262471    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:25.262482    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:25.262491    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:27.264548    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 26
	I0917 10:52:27.264559    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:27.264618    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:27.265692    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:27.265732    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:27.265753    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:27.265773    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:27.265787    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:27.265803    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:27.265814    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:27.265824    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:27.265833    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:27.265840    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:27.265846    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:27.265851    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:27.265856    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:27.265863    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:27.265868    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:27.265875    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:27.265887    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:27.265900    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:27.265915    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:29.267936    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 27
	I0917 10:52:29.267950    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:29.268017    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:29.268878    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:29.268927    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:29.268939    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:29.268956    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:29.268962    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:29.268968    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:29.268977    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:29.269000    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:29.269013    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:29.269020    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:29.269026    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:29.269032    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:29.269038    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:29.269043    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:29.269058    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:29.269072    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:29.269082    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:29.269088    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:29.269095    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:31.269743    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 28
	I0917 10:52:31.269756    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:31.269827    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:31.270761    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:31.270800    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:31.270812    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:31.270826    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:31.270852    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:31.270860    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:31.270866    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:31.270873    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:31.270886    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:31.270898    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:31.270907    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:31.270914    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:31.270922    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:31.270937    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:31.270950    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:31.270956    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:31.270962    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:31.270970    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:31.270978    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:33.273012    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 29
	I0917 10:52:33.273028    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:33.273080    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:33.273948    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for b2:53:9b:71:f9:47 in /var/db/dhcpd_leases ...
	I0917 10:52:33.273991    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:33.274002    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:33.274012    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:33.274021    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:33.274037    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:33.274045    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:33.274052    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:33.274068    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:33.274079    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:33.274088    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:33.274096    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:33.274114    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:33.274125    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:33.274133    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:33.274140    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:33.274149    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:33.274163    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:33.274184    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
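
Attempts 21 through 29 above all rescan the same 17 leases without ever matching b2:53:9b:71:f9:47: the driver generates a MAC address for the guest and then polls the host's bootpd lease file until that MAC shows up. A minimal, self-contained sketch of that poll-and-parse step, assuming only the entry format visible in the log (the real logic lives in minikube's hyperkit driver), might look like:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // leaseRe matches entries of the form printed above, e.g.
    // {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,... Lease:0x66eb12c0}
    var leaseRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)

    // findIP scans the lease file for hwAddr and returns its IP address, if any.
    func findIP(path, hwAddr string) (string, bool) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return "", false
    	}
    	for _, m := range leaseRe.FindAllStringSubmatch(string(data), -1) {
    		if m[2] == hwAddr {
    			return m[1], true
    		}
    	}
    	return "", false
    }

    func main() {
    	// Poll every 2s, mirroring the two-second cadence of the attempts above.
    	for attempt := 0; attempt < 30; attempt++ {
    		if ip, ok := findIP("/var/db/dhcpd_leases", "b2:53:9b:71:f9:47"); ok {
    			fmt.Printf("found IP %s on attempt %d\n", ip, attempt)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("IP address never found in dhcp leases file")
    }
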
	I0917 10:52:35.276262    6033 client.go:171] duration metric: took 1m1.359565478s to LocalClient.Create
	I0917 10:52:37.278470    6033 start.go:128] duration metric: took 1m3.393971228s to createHost
	I0917 10:52:37.278485    6033 start.go:83] releasing machines lock for "offline-docker-248000", held for 1m3.394151585s
	W0917 10:52:37.278500    6033 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:53:9b:71:f9:47
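
The arithmetic is consistent: attempts 0 through 29 at two-second intervals account for roughly 30 x 2 s = 60 s, which matches the 1m1.36s reported for LocalClient.Create, so essentially the entire create phase was spent waiting for a lease that never appeared.
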
	I0917 10:52:37.278832    6033 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:52:37.278858    6033 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:52:37.288402    6033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53745
	I0917 10:52:37.288918    6033 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:52:37.289369    6033 main.go:141] libmachine: Using API Version  1
	I0917 10:52:37.289396    6033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:52:37.289614    6033 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:52:37.289987    6033 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:52:37.290009    6033 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:52:37.298743    6033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53747
	I0917 10:52:37.299096    6033 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:52:37.299532    6033 main.go:141] libmachine: Using API Version  1
	I0917 10:52:37.299543    6033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:52:37.299749    6033 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:52:37.299861    6033 main.go:141] libmachine: (offline-docker-248000) Calling .GetState
	I0917 10:52:37.299950    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:37.300044    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:37.301144    6033 main.go:141] libmachine: (offline-docker-248000) Calling .DriverName
	I0917 10:52:37.321788    6033 out.go:177] * Deleting "offline-docker-248000" in hyperkit ...
	I0917 10:52:37.363971    6033 main.go:141] libmachine: (offline-docker-248000) Calling .Remove
	I0917 10:52:37.364144    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:37.364161    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:37.364241    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:37.365299    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:37.365362    6033 main.go:141] libmachine: (offline-docker-248000) DBG | waiting for graceful shutdown
	I0917 10:52:38.367009    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:38.370827    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:38.370838    6033 main.go:141] libmachine: (offline-docker-248000) DBG | waiting for graceful shutdown
	I0917 10:52:39.368793    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:39.368878    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:39.370578    6033 main.go:141] libmachine: (offline-docker-248000) DBG | waiting for graceful shutdown
	I0917 10:52:40.371607    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:40.371706    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:40.372355    6033 main.go:141] libmachine: (offline-docker-248000) DBG | waiting for graceful shutdown
	I0917 10:52:41.373633    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:41.373699    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:41.374398    6033 main.go:141] libmachine: (offline-docker-248000) DBG | waiting for graceful shutdown
	I0917 10:52:42.375661    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:42.375754    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6078
	I0917 10:52:42.376914    6033 main.go:141] libmachine: (offline-docker-248000) DBG | sending sigkill
	I0917 10:52:42.376924    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0917 10:52:42.389347    6033 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:53:9b:71:f9:47
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:53:9b:71:f9:47
	I0917 10:52:42.389361    6033 start.go:729] Will try again in 5 seconds ...
	I0917 10:52:42.403898    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:52:42 WARN : hyperkit: failed to read stdout: EOF
	I0917 10:52:42.403916    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:52:42 WARN : hyperkit: failed to read stderr: EOF
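
The teardown above checks for a graceful shutdown roughly once per second and escalates to SIGKILL after about five seconds; the two EOF warnings are hyperkit's stdout/stderr pipes closing as the process dies. A rough sketch of that escalate-on-deadline pattern (a hypothetical helper, not the driver's actual code):

    package main

    import (
    	"os"
    	"syscall"
    	"time"
    )

    // stopVM asks the hypervisor process to exit, then kills it if it is
    // still alive once the deadline passes.
    func stopVM(proc *os.Process, deadline time.Duration) error {
    	_ = proc.Signal(syscall.SIGTERM) // request graceful shutdown
    	timeout := time.After(deadline)
    	tick := time.NewTicker(time.Second)
    	defer tick.Stop()
    	for {
    		select {
    		case <-tick.C:
    			// Signal 0 delivers nothing; it only tests whether the pid is alive.
    			if err := proc.Signal(syscall.Signal(0)); err != nil {
    				return nil // process is gone
    			}
    		case <-timeout:
    			return proc.Kill() // "sending sigkill"
    		}
    	}
    }

    func main() {
    	proc, _ := os.FindProcess(6078) // pid read back from hyperkit.pid in the log
    	_ = stopVM(proc, 5*time.Second)
    }
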
	I0917 10:52:47.390269    6033 start.go:360] acquireMachinesLock for offline-docker-248000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:53:40.344892    6033 start.go:364] duration metric: took 52.954322918s to acquireMachinesLock for "offline-docker-248000"
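
The second start then blocks for about 53 s in acquireMachinesLock because the first host is still being torn down; the lock spec printed above gives the semantics: retry every 500 ms, give up after 13 minutes. A simplified lock-file sketch of those semantics (minikube's real lock is a named mutex; the path below is illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lock file every delay until timeout,
    // mimicking the Delay:500ms / Timeout:13m0s spec in the log.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring machines lock")
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock acquired; safe to create the machine")
    }
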
	I0917 10:53:40.344934    6033 start.go:93] Provisioning new machine with config: &{Name:offline-docker-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:53:40.344985    6033 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:53:40.365424    6033 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:53:40.365515    6033 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:53:40.365543    6033 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:53:40.374117    6033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53755
	I0917 10:53:40.374468    6033 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:53:40.374785    6033 main.go:141] libmachine: Using API Version  1
	I0917 10:53:40.374797    6033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:53:40.375013    6033 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:53:40.375121    6033 main.go:141] libmachine: (offline-docker-248000) Calling .GetMachineName
	I0917 10:53:40.375209    6033 main.go:141] libmachine: (offline-docker-248000) Calling .DriverName
	I0917 10:53:40.375335    6033 start.go:159] libmachine.API.Create for "offline-docker-248000" (driver="hyperkit")
	I0917 10:53:40.375350    6033 client.go:168] LocalClient.Create starting
	I0917 10:53:40.375374    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:53:40.375425    6033 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:40.375435    6033 main.go:141] libmachine: Parsing certificate...
	I0917 10:53:40.375477    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:53:40.375516    6033 main.go:141] libmachine: Decoding PEM data...
	I0917 10:53:40.375527    6033 main.go:141] libmachine: Parsing certificate...
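
The Reading/Decoding/Parsing triplet above corresponds to loading the client CA and certificate with the standard library before any driver work starts. A minimal equivalent, using a placeholder path for the PEM file:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Placeholder path; the job reads .minikube/certs/ca.pem and cert.pem.
    	data, err := os.ReadFile("/path/to/ca.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data) // "Decoding PEM data..."
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("loaded certificate for", cert.Subject.CommonName)
    }
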
	I0917 10:53:40.375540    6033 main.go:141] libmachine: Running pre-create checks...
	I0917 10:53:40.375545    6033 main.go:141] libmachine: (offline-docker-248000) Calling .PreCreateCheck
	I0917 10:53:40.375671    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:40.375722    6033 main.go:141] libmachine: (offline-docker-248000) Calling .GetConfigRaw
	I0917 10:53:40.409501    6033 main.go:141] libmachine: Creating machine...
	I0917 10:53:40.409510    6033 main.go:141] libmachine: (offline-docker-248000) Calling .Create
	I0917 10:53:40.409611    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:40.409731    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:53:40.409603    6230 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:53:40.409831    6033 main.go:141] libmachine: (offline-docker-248000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:53:40.617968    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:53:40.617860    6230 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/id_rsa...
	I0917 10:53:40.790695    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:53:40.790615    6230 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/offline-docker-248000.rawdisk...
	I0917 10:53:40.790711    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Writing magic tar header
	I0917 10:53:40.790724    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Writing SSH key tar header
	I0917 10:53:40.791272    6033 main.go:141] libmachine: (offline-docker-248000) DBG | I0917 10:53:40.791231    6230 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000 ...
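
Before the disk image is finalized, the driver mints a fresh SSH identity for the machine (the id_rsa created above). A hedged sketch of that key-generation step, assuming the standard library plus golang.org/x/crypto/ssh; this is illustrative, not the driver's exact code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate an RSA key pair like the id_rsa created above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// 0600 on the private key, as SSH clients require.
    	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
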
	I0917 10:53:41.153665    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:41.153684    6033 main.go:141] libmachine: (offline-docker-248000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/hyperkit.pid
	I0917 10:53:41.153718    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Using UUID 7790e273-ae7d-4e2f-bd32-949a37b09b40
	I0917 10:53:41.179493    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Generated MAC 8e:e3:b7:eb:40:9
	I0917 10:53:41.179509    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-248000
	I0917 10:53:41.179549    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7790e273-ae7d-4e2f-bd32-949a37b09b40", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:53:41.179587    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7790e273-ae7d-4e2f-bd32-949a37b09b40", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:53:41.179643    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7790e273-ae7d-4e2f-bd32-949a37b09b40", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/offline-docker-248000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-248000"}
	I0917 10:53:41.179694    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7790e273-ae7d-4e2f-bd32-949a37b09b40 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/offline-docker-248000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-248000"
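
The CmdLine above is the complete argv the driver hands to /usr/local/bin/hyperkit: two CPUs, 2048M of RAM, a virtio-net NIC (whose MAC the vmnet framework derives from the -U UUID, which is why the driver must rediscover the IP through the lease file), the raw disk and boot2docker ISO, and a direct kexec boot of the kernel and initrd. Launching such a command from Go is plain os/exec plumbing; the sketch below uses placeholder paths, not the job's real ones:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	state := "/path/to/machine" // placeholder for the machine's StateDir
    	args := []string{
    		"-A", "-u",
    		"-F", state + "/hyperkit.pid", // pid file the driver later reads back
    		"-c", "2", // CPUs=2
    		"-m", "2048M", // Memory=2048MB
    		"-s", "0:0,hostbridge", "-s", "31,lpc",
    		"-s", "1:0,virtio-net", // NIC; no MAC flag, it comes from the -U UUID
    		"-U", "7790e273-ae7d-4e2f-bd32-949a37b09b40",
    		"-s", "2:0,virtio-blk," + state + "/disk.rawdisk",
    		"-s", "3,ahci-cd," + state + "/boot2docker.iso",
    		"-s", "4,virtio-rnd",
    		"-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
    	}
    	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
    	if err := cmd.Start(); err != nil { // the driver leaves it running in the background
    		log.Fatal(err)
    	}
    	log.Printf("hyperkit started, pid %d", cmd.Process.Pid)
    }
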
	I0917 10:53:41.179705    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:53:41.182614    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 DEBUG: hyperkit: Pid is 6231
	I0917 10:53:41.183176    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 0
	I0917 10:53:41.183191    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:41.183302    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:41.184414    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:41.184474    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:41.184489    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:41.184508    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:41.184528    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:41.184541    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:41.184551    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:41.184562    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:41.184574    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:41.184585    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:41.184597    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:41.184611    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:41.184622    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:41.184637    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:41.184645    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:41.184652    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:41.184677    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:41.184691    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:41.184700    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:41.190787    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:53:41.198865    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/offline-docker-248000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:53:41.199702    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:53:41.199714    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:53:41.199740    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:53:41.199747    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:53:41.576587    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:53:41.576602    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:53:41.691221    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:53:41.691242    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:53:41.691257    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:53:41.691273    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:53:41.692152    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:53:41.692165    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:53:43.186666    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 1
	I0917 10:53:43.186683    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:43.186720    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:43.187603    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:43.187652    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:43.187661    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:43.187669    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:43.187680    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:43.187690    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:43.187699    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:43.187714    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:43.187727    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:43.187749    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:43.187757    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:43.187767    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:43.187777    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:43.187784    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:43.187791    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:43.187807    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:43.187818    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:43.187830    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:43.187839    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:45.189108    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 2
	I0917 10:53:45.189125    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:45.189191    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:45.190183    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:45.190214    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:45.190227    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:45.190252    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:45.190261    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:45.190269    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:45.190274    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:45.190287    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:45.190308    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:45.190318    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:45.190327    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:45.190336    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:45.190343    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:45.190357    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:45.190368    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:45.190411    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:45.190444    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:45.190451    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:45.190457    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:47.062884    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:47 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:53:47.062998    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:47 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:53:47.063007    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:47 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:53:47.083205    6033 main.go:141] libmachine: (offline-docker-248000) DBG | 2024/09/17 10:53:47 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:53:47.190614    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 3
	I0917 10:53:47.190641    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:47.190819    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:47.192468    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:47.192605    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:47.192618    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:47.192628    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:47.192635    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:47.192645    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:47.192653    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:47.192682    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:47.192699    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:47.192711    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:47.192723    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:47.192745    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:47.192762    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:47.192785    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:47.192803    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:47.192814    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:47.192825    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:47.192852    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:47.192871    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:49.193424    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 4
	I0917 10:53:49.193448    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:49.193516    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:49.194462    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:49.194512    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:49.194526    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:49.194533    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:49.194539    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:49.194557    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:49.194576    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:49.194585    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:49.194604    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:49.194618    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:49.194630    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:49.194644    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:49.194653    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:49.194660    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:49.194666    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:49.194672    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:49.194679    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:49.194686    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:49.194695    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:51.195521    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 5
	I0917 10:53:51.195535    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:51.195610    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:51.196502    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:51.196557    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:51.196569    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:51.196577    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:51.196583    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:51.196601    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:51.196607    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:51.196613    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:51.196619    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:51.196640    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:51.196653    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:51.196671    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:51.196685    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:51.196701    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:51.196710    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:51.196726    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:51.196738    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:51.196746    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:51.196752    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:53.198204    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 6
	I0917 10:53:53.198216    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:53.198335    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:53.199292    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:53.199347    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:53.199360    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:53.199373    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:53.199380    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:53.199386    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:53.199392    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:53.199407    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:53.199418    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:53.199426    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:53.199434    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:53.199462    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:53.199474    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:53.199484    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:53.199491    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:53.199498    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:53.199507    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:53.199523    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:53.199536    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:55.201213    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 7
	I0917 10:53:55.201228    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:55.201275    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:55.202174    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:55.202209    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:55.202221    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:55.202245    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:55.202258    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:55.202266    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:55.202273    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:55.202290    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:55.202302    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:55.202311    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:55.202319    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:55.202332    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:55.202356    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:55.202367    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:55.202377    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:55.202384    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:55.202391    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:55.202398    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:55.202405    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:57.204384    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 8
	I0917 10:53:57.204400    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:57.204472    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:57.205762    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:57.205811    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:57.205819    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:57.205828    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:57.205836    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:57.205842    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:57.205849    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:57.205856    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:57.205862    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:57.205875    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:57.205885    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:57.205893    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:57.205901    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:57.205916    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:57.205936    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:57.205945    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:57.205952    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:57.205961    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:57.205976    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:59.206613    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 9
	I0917 10:53:59.206629    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:59.206671    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:53:59.207525    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:53:59.207537    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:59.207545    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:59.207551    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:59.207564    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:59.207576    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:59.207590    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:59.207599    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:59.207610    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:59.207617    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:59.207629    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:59.207637    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:59.207643    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:59.207651    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:59.207660    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:59.207667    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:59.207685    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:59.207697    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:59.207715    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:01.207703    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 10
	I0917 10:54:01.207716    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:01.207846    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:54:01.208758    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:54:01.208809    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:01.208821    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:01.208829    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:01.208839    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:01.208846    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:01.208851    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:01.208857    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:01.208865    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:01.208872    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:01.208878    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:01.208884    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:01.208891    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:01.208898    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:01.208913    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:01.208929    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:01.208940    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:01.208948    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:01.208956    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:03.209437    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 11
	I0917 10:54:03.209463    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:03.209498    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:54:03.210381    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:54:03.210437    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:03.210452    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:03.210461    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:03.210467    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:03.210473    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:03.210478    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:03.210494    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:03.210506    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:03.210512    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:03.210521    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:03.210532    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:03.210540    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:03.210547    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:03.210554    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:03.210561    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:03.210568    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:03.210577    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:03.210583    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:05.210885    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 12
	I0917 10:54:05.210902    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:05.211015    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:54:05.211908    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:54:05.211949    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:05.211959    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:05.211970    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:05.211977    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:05.211984    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:05.211990    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:05.211996    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:05.212004    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:05.212010    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:05.212018    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:05.212028    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:05.212036    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:05.212044    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:05.212051    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:05.212059    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:05.212067    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:05.212082    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:05.212094    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:07.212569    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 13
	I0917 10:54:07.212585    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:07.212653    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:54:07.213541    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:54:07.213580    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:07.213591    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:07.213600    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:07.213607    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:07.213623    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:07.213636    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:07.213645    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:07.213650    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:07.213663    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:07.213676    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:07.213690    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:07.213698    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:07.213705    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:07.213713    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:07.213721    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:07.213729    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:07.213735    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:07.213741    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
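The cadence of the attempts (one every two seconds, numbered sequentially, re-reading the hyperkit pid each time) suggests a bounded polling loop around that lookup. A sketch of that retry pattern follows, with the two-second interval inferred from the log timestamps; maxAttempts and pollForIP are purely illustrative, not the driver's actual constants or API.

package main

import (
	"fmt"
	"time"
)

// pollForIP illustrates the retry pattern visible in the log: once every
// ~2 seconds, re-read the DHCP lease table and look for the VM's MAC.
func pollForIP(mac string, lookup func(string) (string, bool), maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		fmt.Printf("Attempt %d\n", attempt)
		if ip, ok := lookup(mac); ok {
			return ip, nil
		}
		if attempt < maxAttempts {
			time.Sleep(2 * time.Second)
		}
	}
	return "", fmt.Errorf("no DHCP lease found for %s after %d attempts", mac, maxAttempts)
}

func main() {
	// Stub lookup that never matches, mirroring the attempts in this log.
	neverFound := func(string) (string, bool) { return "", false }
	if _, err := pollForIP("8e:e3:b7:eb:40:9", neverFound, 3); err != nil {
		fmt.Println(err)
	}
}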
	(Attempts 14 through 27, logged from 10:54:09 to 10:54:35 at two-second intervals, repeat the identical search: 17 entries found in /var/db/dhcpd_leases, IPs 192.169.0.2 through 192.169.0.18, none matching 8e:e3:b7:eb:40:9.)
	I0917 10:54:37.238224    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 28
	I0917 10:54:37.238238    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:37.238289    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:54:37.239326    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:54:37.239364    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:37.239372    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:37.239382    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:37.239392    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:37.239408    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:37.239417    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:37.239425    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:37.239431    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:37.239439    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:37.239444    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:37.239461    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:37.239472    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:37.239479    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:37.239485    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:37.239492    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:37.239498    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:37.239504    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:37.239511    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:39.239634    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Attempt 29
	I0917 10:54:39.239648    6033 main.go:141] libmachine: (offline-docker-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:39.239721    6033 main.go:141] libmachine: (offline-docker-248000) DBG | hyperkit pid from json: 6231
	I0917 10:54:39.240627    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Searching for 8e:e3:b7:eb:40:9 in /var/db/dhcpd_leases ...
	I0917 10:54:39.240636    6033 main.go:141] libmachine: (offline-docker-248000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:39.240648    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:39.240661    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:39.240669    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:39.240685    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:39.240691    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:39.240703    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:39.240716    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:39.240727    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:39.240747    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:39.240794    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:39.240806    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:39.240816    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:39.240824    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:39.240840    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:39.240860    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:39.240881    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:39.240892    6033 main.go:141] libmachine: (offline-docker-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:41.242642    6033 client.go:171] duration metric: took 1m0.879151297s to LocalClient.Create
	I0917 10:54:43.244530    6033 start.go:128] duration metric: took 1m2.911653954s to createHost
	I0917 10:54:43.244544    6033 start.go:83] releasing machines lock for "offline-docker-248000", held for 1m2.911742835s
	W0917 10:54:43.244692    6033 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-248000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:e3:b7:eb:40:9
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-248000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:e3:b7:eb:40:9
	I0917 10:54:43.307889    6033 out.go:201] 
	W0917 10:54:43.349974    6033 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:e3:b7:eb:40:9
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:e3:b7:eb:40:9
	W0917 10:54:43.349998    6033 out.go:270] * 
	* 
	W0917 10:54:43.350612    6033 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:54:43.435810    6033 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-248000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-17 10:54:43.544918 -0700 PDT m=+3585.437831220
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-248000 -n offline-docker-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-248000 -n offline-docker-248000: exit status 7 (81.970119ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 10:54:43.624876    6247 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 10:54:43.624898    6247 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-248000" host is not running, skipping log retrieval (state="Error")
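The helper tolerates the nonzero exit because minikube status encodes component state in its exit code rather than signaling a harness failure; by the documented bit scheme, 7 would mean host, cluster, and Kubernetes all not running, which matches state="Error" here. A small sketch of reading that code from Go without treating it as fatal; the binary path and profile are taken from the log above, and the bit interpretation is an assumption about this minikube version:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the helper makes; Output captures stdout ("Error").
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "offline-docker-248000")
		out, err := cmd.Output()

		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // nonzero encodes cluster state, not a crash
		} else if err != nil {
			panic(err) // binary missing or not executable: a real harness error
		}
		fmt.Printf("host=%s exit=%d (may be ok)\n", out, code)
	}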
helpers_test.go:175: Cleaning up "offline-docker-248000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-248000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-248000: (5.286103026s)
--- FAIL: TestOffline (195.65s)
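
The thirty "Attempt N" blocks above are the hyperkit driver polling /var/db/dhcpd_leases on a two-second cadence for a lease matching the VM's newly generated MAC (8e:e3:b7:eb:40:9); the file never gains an 18th entry, so createHost gives up after about a minute with "IP address never found in dhcp leases file". A rough Go sketch of that scan, assuming the macOS lease-file layout behind the {Name:... IPAddress:... HWAddress:...} entries logged above; the field parsing here is illustrative, not the driver's actual code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// lease mirrors the fields the driver logs for each dhcp entry.
	type lease struct {
		Name, IP, HWAddress string
	}

	// parseLeases reads macOS dhcpd_leases records, which place one
	// field per line between "{" and "}" delimiters.
	func parseLeases(path string) ([]lease, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		var all []lease
		var cur lease
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = lease{}
			case line == "}":
				all = append(all, cur)
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IP = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// macOS stores a leading hardware-type byte: "1,aa:bb:..."
				cur.HWAddress = strings.TrimPrefix(line, "hw_address=1,")
			}
		}
		return all, sc.Err()
	}

	func main() {
		mac := "8e:e3:b7:eb:40:9" // the MAC the driver was waiting on
		leases, err := parseLeases("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, l := range leases {
			if l.HWAddress == mac { // octets logged without zero-padding
				fmt.Printf("found %s -> %s\n", mac, l.IP)
				return
			}
		}
		fmt.Printf("no lease for %s among %d entries\n", mac, len(leases))
	}

When no match ever appears, the retry loop exhausts itself and the driver surfaces exactly the error captured above.
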
TestAddons/parallel/Registry (74.11s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.419226ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jzjxg" [d41ea116-b4f8-4b3d-ae06-2e78540cb794] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003207469s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nwrjv" [3f55bb41-a07d-4deb-a7a0-0034c2e839d0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004376141s
addons_test.go:342: (dbg) Run:  kubectl --context addons-684000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-684000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-684000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.064347475s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-684000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
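Both registry pods were healthy within about five seconds, so the failing step is the connectivity probe itself: a throwaway busybox pod runs wget --spider -S against the registry Service's cluster DNS name, and the test wants an HTTP/1.1 200 back within a minute. A rough in-cluster Go equivalent of that probe; the ten-second timeout is an assumption, and the URL resolves only from inside the cluster:

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// HEAD is what wget --spider effectively issues: headers only, no body.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			// A dial error or timeout here is the analogue of the test's
			// "timed out waiting for the condition" failure above.
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Proto, resp.StatusCode) // the test wants "HTTP/1.1 200"
	}
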
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 ip
2024/09/17 10:08:51 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-684000 -n addons-684000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 logs -n 25: (2.653757166s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-073000 | jenkins | v1.34.0 | 17 Sep 24 09:54 PDT |                     |
	|         | -p download-only-073000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-073000                                                                     | download-only-073000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| start   | -o=json --download-only                                                                     | download-only-498000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | -p download-only-498000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-498000                                                                     | download-only-498000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-073000                                                                     | download-only-073000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-498000                                                                     | download-only-498000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-109000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | binary-mirror-109000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49644                                                                      |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-109000                                                                     | binary-mirror-109000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| addons  | enable dashboard -p                                                                         | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | addons-684000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | addons-684000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-684000 --wait=true                                                                | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:58 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=hyperkit  --addons=ingress                                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-684000 addons disable                                                                | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 09:59 PDT | 17 Sep 24 09:59 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-684000 addons                                                                        | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-684000 addons                                                                        | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-684000 addons disable                                                                | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | -p addons-684000                                                                            |                      |         |         |                     |                     |
	| ip      | addons-684000 ip                                                                            | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	| addons  | addons-684000 addons disable                                                                | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-684000 ssh cat                                                                       | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT | 17 Sep 24 10:08 PDT |
	|         | /opt/local-path-provisioner/pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-684000 addons disable                                                                | addons-684000        | jenkins | v1.34.0 | 17 Sep 24 10:08 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:55:36
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:55:36.431281    2206 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:55:36.431523    2206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:36.431529    2206 out.go:358] Setting ErrFile to fd 2...
	I0917 09:55:36.431532    2206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:36.431708    2206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 09:55:36.433283    2206 out.go:352] Setting JSON to false
	I0917 09:55:36.456133    2206 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1503,"bootTime":1726590633,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 09:55:36.456270    2206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:55:36.477699    2206 out.go:177] * [addons-684000] minikube v1.34.0 on Darwin 14.6.1
	I0917 09:55:36.519485    2206 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 09:55:36.519518    2206 notify.go:220] Checking for updates...
	I0917 09:55:36.561322    2206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 09:55:36.582329    2206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 09:55:36.603254    2206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:55:36.624544    2206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 09:55:36.645419    2206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:55:36.666742    2206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:55:36.697478    2206 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 09:55:36.739161    2206 start.go:297] selected driver: hyperkit
	I0917 09:55:36.739185    2206 start.go:901] validating driver "hyperkit" against <nil>
	I0917 09:55:36.739198    2206 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:55:36.742778    2206 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:36.742902    2206 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 09:55:36.751346    2206 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 09:55:36.755181    2206 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:55:36.755199    2206 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 09:55:36.755227    2206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:55:36.755451    2206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 09:55:36.755489    2206 cni.go:84] Creating CNI manager for ""
	I0917 09:55:36.755536    2206 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:55:36.755542    2206 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 09:55:36.755608    2206 start.go:340] cluster config:
	{Name:addons-684000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:55:36.755694    2206 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:36.798278    2206 out.go:177] * Starting "addons-684000" primary control-plane node in "addons-684000" cluster
	I0917 09:55:36.819192    2206 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:36.819264    2206 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 09:55:36.819295    2206 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:36.819535    2206 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 09:55:36.819554    2206 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 09:55:36.820057    2206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/config.json ...
	I0917 09:55:36.820097    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/config.json: {Name:mke236077614dd807a2d83af75a754323ae5d3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:36.820819    2206 start.go:360] acquireMachinesLock for addons-684000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 09:55:36.821358    2206 start.go:364] duration metric: took 514.036µs to acquireMachinesLock for "addons-684000"
	I0917 09:55:36.821411    2206 start.go:93] Provisioning new machine with config: &{Name:addons-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 09:55:36.821500    2206 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 09:55:36.843423    2206 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 09:55:36.843711    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:55:36.843789    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:55:36.853686    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49651
	I0917 09:55:36.854039    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:55:36.854426    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:55:36.854435    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:55:36.854692    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:55:36.854809    2206 main.go:141] libmachine: (addons-684000) Calling .GetMachineName
	I0917 09:55:36.854909    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:36.855009    2206 start.go:159] libmachine.API.Create for "addons-684000" (driver="hyperkit")
	I0917 09:55:36.855033    2206 client.go:168] LocalClient.Create starting
	I0917 09:55:36.855072    2206 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 09:55:37.025287    2206 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 09:55:37.132083    2206 main.go:141] libmachine: Running pre-create checks...
	I0917 09:55:37.132093    2206 main.go:141] libmachine: (addons-684000) Calling .PreCreateCheck
	I0917 09:55:37.132222    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:37.132402    2206 main.go:141] libmachine: (addons-684000) Calling .GetConfigRaw
	I0917 09:55:37.132827    2206 main.go:141] libmachine: Creating machine...
	I0917 09:55:37.132844    2206 main.go:141] libmachine: (addons-684000) Calling .Create
	I0917 09:55:37.132939    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:37.133068    2206 main.go:141] libmachine: (addons-684000) DBG | I0917 09:55:37.132930    2214 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 09:55:37.133165    2206 main.go:141] libmachine: (addons-684000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 09:55:37.400151    2206 main.go:141] libmachine: (addons-684000) DBG | I0917 09:55:37.400049    2214 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa...
	I0917 09:55:37.523449    2206 main.go:141] libmachine: (addons-684000) DBG | I0917 09:55:37.523358    2214 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/addons-684000.rawdisk...
	I0917 09:55:37.523470    2206 main.go:141] libmachine: (addons-684000) DBG | Writing magic tar header
	I0917 09:55:37.523490    2206 main.go:141] libmachine: (addons-684000) DBG | Writing SSH key tar header
	I0917 09:55:37.523860    2206 main.go:141] libmachine: (addons-684000) DBG | I0917 09:55:37.523813    2214 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000 ...
	I0917 09:55:38.037005    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:38.037029    2206 main.go:141] libmachine: (addons-684000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/hyperkit.pid
	I0917 09:55:38.037075    2206 main.go:141] libmachine: (addons-684000) DBG | Using UUID 603c15bb-875c-4179-b729-ba221f45cdb7
	I0917 09:55:38.304123    2206 main.go:141] libmachine: (addons-684000) DBG | Generated MAC e2:d3:e5:a7:fc:ca
	I0917 09:55:38.304147    2206 main.go:141] libmachine: (addons-684000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-684000
	I0917 09:55:38.304193    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"603c15bb-875c-4179-b729-ba221f45cdb7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 09:55:38.304217    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"603c15bb-875c-4179-b729-ba221f45cdb7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 09:55:38.304270    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "603c15bb-875c-4179-b729-ba221f45cdb7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/addons-684000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-684000"}
	I0917 09:55:38.304305    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 603c15bb-875c-4179-b729-ba221f45cdb7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/addons-684000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-684000"
	I0917 09:55:38.304327    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 09:55:38.307230    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 DEBUG: hyperkit: Pid is 2221
	I0917 09:55:38.308165    2206 main.go:141] libmachine: (addons-684000) DBG | Attempt 0
	I0917 09:55:38.308174    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:38.308229    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:38.309204    2206 main.go:141] libmachine: (addons-684000) DBG | Searching for e2:d3:e5:a7:fc:ca in /var/db/dhcpd_leases ...
	I0917 09:55:38.324568    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 09:55:38.384248    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 09:55:38.384879    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 09:55:38.384901    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 09:55:38.384909    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 09:55:38.384916    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 09:55:38.909325    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 09:55:38.909341    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 09:55:39.025746    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 09:55:39.025764    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 09:55:39.025775    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 09:55:39.025783    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 09:55:39.026628    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 09:55:39.026639    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 09:55:40.310043    2206 main.go:141] libmachine: (addons-684000) DBG | Attempt 1
	I0917 09:55:40.310062    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:40.310145    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:40.311043    2206 main.go:141] libmachine: (addons-684000) DBG | Searching for e2:d3:e5:a7:fc:ca in /var/db/dhcpd_leases ...
	I0917 09:55:42.312850    2206 main.go:141] libmachine: (addons-684000) DBG | Attempt 2
	I0917 09:55:42.312864    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:42.312929    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:42.313763    2206 main.go:141] libmachine: (addons-684000) DBG | Searching for e2:d3:e5:a7:fc:ca in /var/db/dhcpd_leases ...
	I0917 09:55:44.313956    2206 main.go:141] libmachine: (addons-684000) DBG | Attempt 3
	I0917 09:55:44.313970    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:44.314049    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:44.314899    2206 main.go:141] libmachine: (addons-684000) DBG | Searching for e2:d3:e5:a7:fc:ca in /var/db/dhcpd_leases ...
	I0917 09:55:44.610159    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 09:55:44.610174    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 09:55:44.610199    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 09:55:44.628915    2206 main.go:141] libmachine: (addons-684000) DBG | 2024/09/17 09:55:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 09:55:46.315413    2206 main.go:141] libmachine: (addons-684000) DBG | Attempt 4
	I0917 09:55:46.315429    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:46.315517    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:46.316366    2206 main.go:141] libmachine: (addons-684000) DBG | Searching for e2:d3:e5:a7:fc:ca in /var/db/dhcpd_leases ...
	I0917 09:55:48.317923    2206 main.go:141] libmachine: (addons-684000) DBG | Attempt 5
	I0917 09:55:48.317949    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:48.318098    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:48.319759    2206 main.go:141] libmachine: (addons-684000) DBG | Searching for e2:d3:e5:a7:fc:ca in /var/db/dhcpd_leases ...
	I0917 09:55:48.319850    2206 main.go:141] libmachine: (addons-684000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I0917 09:55:48.319889    2206 main.go:141] libmachine: (addons-684000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 09:55:48.319916    2206 main.go:141] libmachine: (addons-684000) DBG | Found match: e2:d3:e5:a7:fc:ca
	I0917 09:55:48.319922    2206 main.go:141] libmachine: (addons-684000) DBG | IP: 192.169.0.2
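The attempts above poll /var/db/dhcpd_leases every couple of seconds until an entry with the VM's MAC address appears. A minimal Go sketch of that lookup, assuming the macOS vmnet lease-file layout (name/ip_address/hw_address blocks, with ip_address preceding hw_address); findIPForMAC is an illustrative helper, not the driver's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the macOS vmnet lease file for the entry whose
// hw_address ends with the VM's MAC and returns the leased IP.
func findIPForMAC(leaseFile, mac string) (string, bool) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", false // file may not exist before the first lease is issued
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, true // ip_address was seen earlier in the same block
		}
	}
	return "", false
}

func main() {
	for attempt := 1; ; attempt++ {
		fmt.Printf("Attempt %d\n", attempt)
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", "e2:d3:e5:a7:fc:ca"); ok {
			fmt.Println("IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
	}
}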
	I0917 09:55:48.319960    2206 main.go:141] libmachine: (addons-684000) Calling .GetConfigRaw
	I0917 09:55:48.321195    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:48.321351    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:48.321494    2206 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 09:55:48.321509    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:55:48.321625    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:55:48.321699    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:55:48.322809    2206 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 09:55:48.322819    2206 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 09:55:48.322824    2206 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 09:55:48.322827    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:48.323027    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:48.323179    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:48.323324    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:48.323424    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:48.324142    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:48.324346    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:48.324354    2206 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 09:55:49.389261    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
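WaitForSSH treats a clean `exit 0` over SSH as the guest's readiness signal. The driver uses an in-process SSH client (the "Using SSH client type: native" lines); the sketch below substitutes the ssh CLI to stay short, and waitForSSH is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op remote command until the guest's sshd accepts it.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			"docker@"+addr, // the log's SSH username is "docker"
			"exit 0")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh not ready within %s", timeout)
}

func main() {
	err := waitForSSH("192.169.0.2",
		"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa",
		time.Minute)
	fmt.Println(err)
}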
	I0917 09:55:49.389274    2206 main.go:141] libmachine: Detecting the provisioner...
	I0917 09:55:49.389279    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.389414    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.389510    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.389593    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.389688    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.389878    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:49.390013    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:49.390021    2206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 09:55:49.453051    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 09:55:49.453115    2206 main.go:141] libmachine: found compatible host: buildroot
	I0917 09:55:49.453121    2206 main.go:141] libmachine: Provisioning with buildroot...
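The provisioner is chosen by matching the ID= field of /etc/os-release against known distributions; "buildroot" selects the minikube ISO provisioning path. A sketch of that detection using the os-release content from the log (detectProvisioner is illustrative, not libmachine's code):

package main

import (
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

// detectProvisioner returns the unquoted ID= value of /etc/os-release,
// which the provisioning layer maps to a distro-specific provisioner.
func detectProvisioner(contents string) string {
	for _, line := range strings.Split(contents, "\n") {
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	fmt.Println(detectProvisioner(osRelease)) // prints "buildroot"
}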
	I0917 09:55:49.453131    2206 main.go:141] libmachine: (addons-684000) Calling .GetMachineName
	I0917 09:55:49.453262    2206 buildroot.go:166] provisioning hostname "addons-684000"
	I0917 09:55:49.453273    2206 main.go:141] libmachine: (addons-684000) Calling .GetMachineName
	I0917 09:55:49.453348    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.453465    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.453554    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.453648    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.453735    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.453858    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:49.454001    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:49.454009    2206 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-684000 && echo "addons-684000" | sudo tee /etc/hostname
	I0917 09:55:49.528819    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-684000
	
	I0917 09:55:49.528848    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.528987    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.529092    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.529211    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.529295    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.529433    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:49.529573    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:49.529589    2206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-684000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-684000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-684000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 09:55:49.599344    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 09:55:49.599365    2206 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 09:55:49.599383    2206 buildroot.go:174] setting up certificates
	I0917 09:55:49.599391    2206 provision.go:84] configureAuth start
	I0917 09:55:49.599400    2206 main.go:141] libmachine: (addons-684000) Calling .GetMachineName
	I0917 09:55:49.599531    2206 main.go:141] libmachine: (addons-684000) Calling .GetIP
	I0917 09:55:49.599631    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.599709    2206 provision.go:143] copyHostCerts
	I0917 09:55:49.599810    2206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 09:55:49.600070    2206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 09:55:49.600237    2206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 09:55:49.600374    2206 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.addons-684000 san=[127.0.0.1 192.169.0.2 addons-684000 localhost minikube]
	I0917 09:55:49.730621    2206 provision.go:177] copyRemoteCerts
	I0917 09:55:49.730687    2206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 09:55:49.730704    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.730849    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.730948    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.731047    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.731140    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:55:49.771014    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 09:55:49.789248    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 09:55:49.808310    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 09:55:49.827470    2206 provision.go:87] duration metric: took 228.0625ms to configureAuth
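configureAuth generates a Docker server certificate signed by the minikube CA, carrying the SAN list shown above (127.0.0.1, 192.169.0.2, addons-684000, localhost, minikube) so the daemon's TLS endpoint verifies under any of those names. A compressed crypto/x509 sketch of that signing step, not minikube's actual provision code, with error handling elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-ins for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert for dockerd, embedding the SAN list from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-684000"}},
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-684000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // server.pem contents
}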
	I0917 09:55:49.827485    2206 buildroot.go:189] setting minikube options for container-runtime
	I0917 09:55:49.827618    2206 config.go:182] Loaded profile config "addons-684000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:55:49.827631    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:49.827762    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.827864    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.827991    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.828105    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.828190    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.828303    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:49.828431    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:49.828439    2206 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 09:55:49.892186    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 09:55:49.892197    2206 buildroot.go:70] root file system type: tmpfs
	I0917 09:55:49.892270    2206 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 09:55:49.892282    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.892415    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.892491    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.892577    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.892659    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.892783    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:49.892925    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:49.892969    2206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 09:55:49.966676    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 09:55:49.966703    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:49.966846    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:49.966947    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.967028    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:49.967103    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:49.967222    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:49.967374    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:49.967386    2206 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 09:55:51.523265    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 09:55:51.523280    2206 main.go:141] libmachine: Checking connection to Docker...
	I0917 09:55:51.523291    2206 main.go:141] libmachine: (addons-684000) Calling .GetURL
	I0917 09:55:51.523433    2206 main.go:141] libmachine: Docker is up and running!
	I0917 09:55:51.523441    2206 main.go:141] libmachine: Reticulating splines...
	I0917 09:55:51.523456    2206 client.go:171] duration metric: took 14.66827924s to LocalClient.Create
	I0917 09:55:51.523469    2206 start.go:167] duration metric: took 14.668333152s to libmachine.API.Create "addons-684000"
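The docker.service update a few lines above follows a write-then-diff-then-swap idiom: the unit is written to docker.service.new, and only when it differs from the installed file is it moved into place and the daemon reloaded, enabled, and restarted, so an unchanged config never restarts Docker. A rough local Go equivalent (updateUnit is a hypothetical helper; the real driver does all of this over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs a new unit file only when its contents changed,
// then reloads systemd and restarts the service.
func updateUnit(path string, desired []byte) error {
	current, _ := os.ReadFile(path) // a missing file reads as empty, forcing an install
	if bytes.Equal(current, desired) {
		return nil // unchanged config: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", desired, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")))
}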
	I0917 09:55:51.523479    2206 start.go:293] postStartSetup for "addons-684000" (driver="hyperkit")
	I0917 09:55:51.523487    2206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 09:55:51.523496    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:51.523647    2206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 09:55:51.523660    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:51.523763    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:51.523853    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:51.523942    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:51.524036    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:55:51.569883    2206 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 09:55:51.574727    2206 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 09:55:51.574745    2206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 09:55:51.574851    2206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 09:55:51.574901    2206 start.go:296] duration metric: took 51.416039ms for postStartSetup
	I0917 09:55:51.574923    2206 main.go:141] libmachine: (addons-684000) Calling .GetConfigRaw
	I0917 09:55:51.575512    2206 main.go:141] libmachine: (addons-684000) Calling .GetIP
	I0917 09:55:51.575662    2206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/config.json ...
	I0917 09:55:51.575979    2206 start.go:128] duration metric: took 14.754337637s to createHost
	I0917 09:55:51.575998    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:51.576091    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:51.576180    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:51.576264    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:51.576349    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:51.576453    2206 main.go:141] libmachine: Using SSH client type: native
	I0917 09:55:51.576581    2206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10ecc820] 0x10ecf500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 09:55:51.576588    2206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 09:55:51.642219    2206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726592150.705791756
	
	I0917 09:55:51.642231    2206 fix.go:216] guest clock: 1726592150.705791756
	I0917 09:55:51.642236    2206 fix.go:229] Guest: 2024-09-17 09:55:50.705791756 -0700 PDT Remote: 2024-09-17 09:55:51.575987 -0700 PDT m=+15.180005501 (delta=-870.195244ms)
	I0917 09:55:51.642257    2206 fix.go:200] guest clock delta is within tolerance: -870.195244ms
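The guest clock check parses `date +%s.%N` output from the VM and compares it against the host clock; only when the delta exceeds a tolerance does minikube resync the guest, and here -870ms is within bounds. A sketch of that comparison (checkClockDelta is illustrative, and the 2s tolerance is an assumed value, not necessarily minikube's exact threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// checkClockDelta parses `date +%s.%N` output and reports whether the skew
// against the host clock is within tolerance. Assumes the seconds.nanoseconds
// format always contains a dot, as `date +%s.%N` guarantees.
func checkClockDelta(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	delta := time.Unix(sec, nsec).Sub(host)
	return delta, delta > -tolerance && delta < tolerance
}

func main() {
	// In the log the guest reported 1726592150.705791756 while the host read
	// 09:55:51.575987, a delta of about -870ms.
	delta, ok := checkClockDelta("1726592150.705791756", time.Now(), 2*time.Second)
	fmt.Println(delta, ok)
}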
	I0917 09:55:51.642261    2206 start.go:83] releasing machines lock for "addons-684000", held for 14.820759014s
	I0917 09:55:51.642277    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:51.642404    2206 main.go:141] libmachine: (addons-684000) Calling .GetIP
	I0917 09:55:51.642504    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:51.642796    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:51.642892    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:55:51.642995    2206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 09:55:51.643026    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:51.643038    2206 ssh_runner.go:195] Run: cat /version.json
	I0917 09:55:51.643050    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:55:51.643126    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:51.643145    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:55:51.643226    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:51.643242    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:55:51.643302    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:51.643328    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:55:51.643381    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:55:51.643405    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:55:51.731552    2206 ssh_runner.go:195] Run: systemctl --version
	I0917 09:55:51.736596    2206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 09:55:51.741092    2206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 09:55:51.741158    2206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 09:55:51.753832    2206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 09:55:51.753846    2206 start.go:495] detecting cgroup driver to use...
	I0917 09:55:51.753943    2206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 09:55:51.768408    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 09:55:51.776744    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 09:55:51.785269    2206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 09:55:51.785329    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 09:55:51.793670    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 09:55:51.801795    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 09:55:51.810156    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 09:55:51.818900    2206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 09:55:51.827302    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 09:55:51.835451    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 09:55:51.843444    2206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 09:55:51.851541    2206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 09:55:51.859087    2206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 09:55:51.866451    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:55:51.962472    2206 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 09:55:51.981154    2206 start.go:495] detecting cgroup driver to use...
	I0917 09:55:51.981237    2206 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 09:55:51.997798    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 09:55:52.013057    2206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 09:55:52.029084    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 09:55:52.039537    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 09:55:52.049563    2206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 09:55:52.077012    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 09:55:52.087202    2206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 09:55:52.102914    2206 ssh_runner.go:195] Run: which cri-dockerd
	I0917 09:55:52.105862    2206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 09:55:52.113025    2206 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 09:55:52.126922    2206 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 09:55:52.219828    2206 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 09:55:52.321264    2206 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 09:55:52.321341    2206 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 09:55:52.335491    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:55:52.426804    2206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 09:55:54.730718    2206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.303873651s)
	I0917 09:55:54.730796    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 09:55:54.741223    2206 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 09:55:54.756248    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 09:55:54.767910    2206 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 09:55:54.862317    2206 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 09:55:54.952499    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:55:55.060004    2206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 09:55:55.073601    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 09:55:55.084505    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:55:55.183607    2206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 09:55:55.241410    2206 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 09:55:55.241511    2206 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 09:55:55.245906    2206 start.go:563] Will wait 60s for crictl version
	I0917 09:55:55.245967    2206 ssh_runner.go:195] Run: which crictl
	I0917 09:55:55.248918    2206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 09:55:55.275962    2206 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 09:55:55.276053    2206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 09:55:55.291100    2206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 09:55:55.352361    2206 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 09:55:55.352418    2206 main.go:141] libmachine: (addons-684000) Calling .GetIP
	I0917 09:55:55.352826    2206 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 09:55:55.357328    2206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
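The bash one-liner above makes the host.minikube.internal mapping idempotent: any previous line for that name is filtered out, the fresh IP is appended, and the result is copied back over /etc/hosts. The same logic in Go (ensureHostsEntry is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>name" and appends
// a fresh "ip<TAB>name" mapping, mirroring the grep -v / echo / cp pipeline.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal"))
}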
	I0917 09:55:55.367081    2206 kubeadm.go:883] updating cluster {Name:addons-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 09:55:55.367156    2206 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:55.367223    2206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 09:55:55.379461    2206 docker.go:685] Got preloaded images: 
	I0917 09:55:55.379474    2206 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0917 09:55:55.379523    2206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 09:55:55.387035    2206 ssh_runner.go:195] Run: which lz4
	I0917 09:55:55.389869    2206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 09:55:55.392826    2206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 09:55:55.392843    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0917 09:55:56.375930    2206 docker.go:649] duration metric: took 986.104438ms to copy over tarball
	I0917 09:55:56.376017    2206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 09:55:58.741304    2206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.365241841s)
	I0917 09:55:58.741318    2206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 09:55:58.766091    2206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 09:55:58.774011    2206 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0917 09:55:58.787765    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:55:58.886296    2206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 09:56:01.268186    2206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.381849892s)
	I0917 09:56:01.268295    2206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 09:56:01.280595    2206 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 09:56:01.280615    2206 cache_images.go:84] Images are preloaded, skipping loading
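Whether the preload can be skipped is decided by listing the images the daemon already has and checking that the expected control-plane images are present; the first check at 09:55:55 came back empty, and after extracting the tarball this one finds everything. A sketch of that check (imagesPreloaded is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded lists what the daemon already has and reports whether
// every expected image is present, so the tarball load can be skipped.
func imagesPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil // e.g. kube-apiserver missing before the preload
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	})
	fmt.Println(ok, err)
}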
	I0917 09:56:01.280622    2206 kubeadm.go:934] updating node { 192.169.0.2 8443 v1.31.1 docker true true} ...
	I0917 09:56:01.280707    2206 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-684000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 09:56:01.280791    2206 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 09:56:01.315229    2206 cni.go:84] Creating CNI manager for ""
	I0917 09:56:01.315245    2206 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:56:01.315255    2206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 09:56:01.315270    2206 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-684000 NodeName:addons-684000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 09:56:01.315368    2206 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-684000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 09:56:01.315439    2206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 09:56:01.322889    2206 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 09:56:01.322949    2206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 09:56:01.330630    2206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 09:56:01.344063    2206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 09:56:01.357470    2206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
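The kubeadm.yaml.new just copied to the node is the four-document config dumped above, with the node-specific fields (name, IP, port) filled in from the cluster config. A toy text/template rendering of the InitConfiguration block, assuming a template-based generator; this is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A cut-down InitConfiguration template; the real config also carries the
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration
// documents shown above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct {
		NodeName, NodeIP string
		Port             int
	}{"addons-684000", "192.169.0.2", 8443})
}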
	I0917 09:56:01.370996    2206 ssh_runner.go:195] Run: grep 192.169.0.2	control-plane.minikube.internal$ /etc/hosts
	I0917 09:56:01.373814    2206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 09:56:01.382963    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:01.479728    2206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 09:56:01.495053    2206 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000 for IP: 192.169.0.2
	I0917 09:56:01.495065    2206 certs.go:194] generating shared ca certs ...
	I0917 09:56:01.495075    2206 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.495281    2206 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 09:56:01.610928    2206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt ...
	I0917 09:56:01.610942    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt: {Name:mk1666a467f31694f6995a21a84e089bc1051efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.611289    2206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key ...
	I0917 09:56:01.611298    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key: {Name:mk6d424490342367e69ecdbce93d8d7529b0c575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.611505    2206 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 09:56:01.665564    2206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt ...
	I0917 09:56:01.665575    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt: {Name:mkb191174386ca424b9d74ecd302eb1f75ab42c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.665891    2206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key ...
	I0917 09:56:01.665902    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key: {Name:mk0d5a14b0cb6eda81d70141575619bc17d50ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.666111    2206 certs.go:256] generating profile certs ...
	I0917 09:56:01.666168    2206 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.key
	I0917 09:56:01.666182    2206 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt with IP's: []
	I0917 09:56:01.713038    2206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt ...
	I0917 09:56:01.713050    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: {Name:mkd8c48fa23ec9342797004647311b3c717c2b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.713313    2206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.key ...
	I0917 09:56:01.713320    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.key: {Name:mk7dfc2bed371a7c8fd353522ebb11b1d55b27e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.713517    2206 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.key.d8607295
	I0917 09:56:01.713539    2206 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.crt.d8607295 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.2]
	I0917 09:56:01.858673    2206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.crt.d8607295 ...
	I0917 09:56:01.858688    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.crt.d8607295: {Name:mk16cafa4ad3658849fb1822e453e34122baab00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.859014    2206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.key.d8607295 ...
	I0917 09:56:01.859023    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.key.d8607295: {Name:mk296c629373223832a9f6317c05567e372d0cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.859230    2206 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.crt.d8607295 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.crt
	I0917 09:56:01.859399    2206 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.key.d8607295 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.key
	I0917 09:56:01.859569    2206 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.key
	I0917 09:56:01.859590    2206 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.crt with IP's: []
	I0917 09:56:01.915921    2206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.crt ...
	I0917 09:56:01.915933    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.crt: {Name:mk50d6ce29242ad9a33e3e5700c2f6987cb978ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.916283    2206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.key ...
	I0917 09:56:01.916290    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.key: {Name:mk7bdf991c928bf4e2d4fd822f53b5ffeb43f48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:01.916750    2206 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 09:56:01.916809    2206 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 09:56:01.916839    2206 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 09:56:01.916867    2206 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 09:56:01.917344    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 09:56:01.938818    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 09:56:01.961521    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 09:56:01.982270    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 09:56:02.002689    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 09:56:02.022398    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 09:56:02.041067    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 09:56:02.061791    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 09:56:02.081356    2206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 09:56:02.100828    2206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 09:56:02.115044    2206 ssh_runner.go:195] Run: openssl version
	I0917 09:56:02.119210    2206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 09:56:02.128466    2206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 09:56:02.131777    2206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17  2024 /usr/share/ca-certificates/minikubeCA.pem
	I0917 09:56:02.131814    2206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 09:56:02.135965    2206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 09:56:02.146644    2206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 09:56:02.153191    2206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 09:56:02.153234    2206 kubeadm.go:392] StartCluster: {Name:addons-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:56:02.153340    2206 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 09:56:02.170053    2206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 09:56:02.182820    2206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 09:56:02.193086    2206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 09:56:02.202094    2206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 09:56:02.202103    2206 kubeadm.go:157] found existing configuration files:
	
	I0917 09:56:02.202159    2206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 09:56:02.210073    2206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 09:56:02.210120    2206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 09:56:02.218030    2206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 09:56:02.225696    2206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 09:56:02.225740    2206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 09:56:02.233821    2206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 09:56:02.241564    2206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 09:56:02.241622    2206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 09:56:02.249749    2206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 09:56:02.257489    2206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 09:56:02.257535    2206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
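
The sequence above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and, when the check exits non-zero (here status 2, because the files do not exist yet), removes the file so kubeadm can regenerate it. A minimal self-contained sketch of that loop; runSSH is a hypothetical stand-in for minikube's ssh_runner and simply runs the command through bash locally.

package main

import (
	"fmt"
	"os/exec"
)

// runSSH stands in for minikube's ssh_runner; in the real flow the
// command executes on the VM over SSH.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// If the endpoint is not in the file (or the file is missing,
		// as in the log above), delete it and let kubeadm recreate it.
		if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			_ = runSSH(fmt.Sprintf("sudo rm -f %s", f))
		}
	}
}
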
	I0917 09:56:02.265726    2206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 09:56:02.304257    2206 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 09:56:02.304306    2206 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 09:56:02.374189    2206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 09:56:02.374276    2206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 09:56:02.374367    2206 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 09:56:02.383892    2206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 09:56:02.424235    2206 out.go:235]   - Generating certificates and keys ...
	I0917 09:56:02.424335    2206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 09:56:02.424389    2206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 09:56:02.559068    2206 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 09:56:02.639779    2206 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 09:56:02.875995    2206 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 09:56:02.942438    2206 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 09:56:03.233265    2206 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 09:56:03.233403    2206 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-684000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0917 09:56:03.336744    2206 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 09:56:03.336871    2206 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-684000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0917 09:56:03.411372    2206 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 09:56:03.673549    2206 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 09:56:04.135414    2206 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 09:56:04.135541    2206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 09:56:04.449423    2206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 09:56:05.032780    2206 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 09:56:05.271327    2206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 09:56:05.385267    2206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 09:56:05.450003    2206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 09:56:05.450411    2206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 09:56:05.452859    2206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 09:56:05.479241    2206 out.go:235]   - Booting up control plane ...
	I0917 09:56:05.479325    2206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 09:56:05.479385    2206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 09:56:05.479450    2206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 09:56:05.479536    2206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 09:56:05.479612    2206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 09:56:05.479651    2206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 09:56:05.576116    2206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 09:56:05.576207    2206 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 09:56:06.081169    2206 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.457293ms
	I0917 09:56:06.081240    2206 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 09:56:10.581147    2206 kubeadm.go:310] [api-check] The API server is healthy after 4.502541953s
	I0917 09:56:10.593678    2206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 09:56:10.600804    2206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 09:56:10.613331    2206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 09:56:10.613501    2206 kubeadm.go:310] [mark-control-plane] Marking the node addons-684000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 09:56:10.627067    2206 kubeadm.go:310] [bootstrap-token] Using token: i4lhi4.oxt7iixgwrg79x8j
	I0917 09:56:10.653147    2206 out.go:235]   - Configuring RBAC rules ...
	I0917 09:56:10.653268    2206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 09:56:10.700604    2206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 09:56:10.705883    2206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 09:56:10.708221    2206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 09:56:10.710396    2206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 09:56:10.712450    2206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 09:56:10.989951    2206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 09:56:11.414333    2206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 09:56:11.988265    2206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 09:56:11.989033    2206 kubeadm.go:310] 
	I0917 09:56:11.989093    2206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 09:56:11.989104    2206 kubeadm.go:310] 
	I0917 09:56:11.989169    2206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 09:56:11.989175    2206 kubeadm.go:310] 
	I0917 09:56:11.989193    2206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 09:56:11.989240    2206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 09:56:11.989284    2206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 09:56:11.989293    2206 kubeadm.go:310] 
	I0917 09:56:11.989331    2206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 09:56:11.989337    2206 kubeadm.go:310] 
	I0917 09:56:11.989368    2206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 09:56:11.989383    2206 kubeadm.go:310] 
	I0917 09:56:11.989422    2206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 09:56:11.989475    2206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 09:56:11.989520    2206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 09:56:11.989526    2206 kubeadm.go:310] 
	I0917 09:56:11.989606    2206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 09:56:11.989673    2206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 09:56:11.989678    2206 kubeadm.go:310] 
	I0917 09:56:11.989741    2206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i4lhi4.oxt7iixgwrg79x8j \
	I0917 09:56:11.989820    2206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2ee1fec0c6b6f3262ed73499514c43f41e49a477865f53aa9f7a9ae5b901abf0 \
	I0917 09:56:11.989836    2206 kubeadm.go:310] 	--control-plane 
	I0917 09:56:11.989842    2206 kubeadm.go:310] 
	I0917 09:56:11.989903    2206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 09:56:11.989907    2206 kubeadm.go:310] 
	I0917 09:56:11.989971    2206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i4lhi4.oxt7iixgwrg79x8j \
	I0917 09:56:11.990055    2206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2ee1fec0c6b6f3262ed73499514c43f41e49a477865f53aa9f7a9ae5b901abf0 
	I0917 09:56:11.990440    2206 kubeadm.go:310] W0917 16:56:01.372827    1584 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 09:56:11.990666    2206 kubeadm.go:310] W0917 16:56:01.374200    1584 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 09:56:11.990753    2206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 09:56:11.990762    2206 cni.go:84] Creating CNI manager for ""
	I0917 09:56:11.990771    2206 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:56:12.015410    2206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 09:56:12.052318    2206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 09:56:12.060597    2206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
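
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. The log does not show its contents; the following is a representative bridge conflist (standard CNI bridge/host-local/portmap plugins), assumed for illustration rather than the exact payload minikube writes.

{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
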
	I0917 09:56:12.079947    2206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 09:56:12.080031    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:12.080036    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-684000 minikube.k8s.io/updated_at=2024_09_17T09_56_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-684000 minikube.k8s.io/primary=true
	I0917 09:56:12.094264    2206 ops.go:34] apiserver oom_adj: -16
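
The -16 read back from /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver is deprioritized for the kernel OOM killer (negative values make a process less likely to be killed). A self-contained sketch of the same check, mirroring the shell pipeline in the log; it must run on a Linux host with kube-apiserver running.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	// pgrep may print several PIDs; take the first, as the $() in the
	// log's pipeline effectively does for a single apiserver.
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the run above shows -16
}
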
	I0917 09:56:12.161431    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:12.663245    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:13.163131    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:13.663114    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:14.163219    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:14.663077    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:15.162692    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:15.661672    2206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 09:56:15.728060    2206 kubeadm.go:1113] duration metric: took 3.648056309s to wait for elevateKubeSystemPrivileges
	I0917 09:56:15.728084    2206 kubeadm.go:394] duration metric: took 13.57473354s to StartCluster
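
The eight `kubectl get sa default` runs above, spaced roughly 500ms apart, are a poll: minikube retries until the default service account exists (about 3.6s here, per the elevateKubeSystemPrivileges duration metric). A generic sketch of that retry pattern; the check function is a stand-in that simulates the service account appearing on the eighth attempt.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check at the given interval until it succeeds or the
// timeout elapses, mirroring the get-sa loop in the log above.
func waitFor(check func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	attempts := 0
	// Stand-in for: kubectl get sa default --kubeconfig=...
	check := func() error {
		attempts++
		if attempts < 8 {
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	}
	if err := waitFor(check, 500*time.Millisecond, 6*time.Minute); err == nil {
		fmt.Printf("default SA ready after %s\n", time.Since(start))
	}
}
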
	I0917 09:56:15.728101    2206 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:15.728291    2206 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 09:56:15.728532    2206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:56:15.728846    2206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 09:56:15.728853    2206 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 09:56:15.728882    2206 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 09:56:15.728968    2206 addons.go:69] Setting yakd=true in profile "addons-684000"
	I0917 09:56:15.728980    2206 addons.go:69] Setting default-storageclass=true in profile "addons-684000"
	I0917 09:56:15.728995    2206 addons.go:234] Setting addon yakd=true in "addons-684000"
	I0917 09:56:15.728997    2206 config.go:182] Loaded profile config "addons-684000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:56:15.728988    2206 addons.go:69] Setting ingress=true in profile "addons-684000"
	I0917 09:56:15.729017    2206 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-684000"
	I0917 09:56:15.729022    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.728992    2206 addons.go:69] Setting registry=true in profile "addons-684000"
	I0917 09:56:15.729038    2206 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-684000"
	I0917 09:56:15.729000    2206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-684000"
	I0917 09:56:15.729048    2206 addons.go:234] Setting addon registry=true in "addons-684000"
	I0917 09:56:15.729050    2206 addons.go:234] Setting addon ingress=true in "addons-684000"
	I0917 09:56:15.729056    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729038    2206 addons.go:69] Setting inspektor-gadget=true in profile "addons-684000"
	I0917 09:56:15.729108    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729095    2206 addons.go:69] Setting gcp-auth=true in profile "addons-684000"
	I0917 09:56:15.729127    2206 addons.go:234] Setting addon inspektor-gadget=true in "addons-684000"
	I0917 09:56:15.729007    2206 addons.go:69] Setting cloud-spanner=true in profile "addons-684000"
	I0917 09:56:15.729149    2206 addons.go:234] Setting addon cloud-spanner=true in "addons-684000"
	I0917 09:56:15.729153    2206 mustload.go:65] Loading cluster: addons-684000
	I0917 09:56:15.729163    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729146    2206 addons.go:69] Setting ingress-dns=true in profile "addons-684000"
	I0917 09:56:15.729205    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729208    2206 addons.go:234] Setting addon ingress-dns=true in "addons-684000"
	I0917 09:56:15.729263    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729351    2206 addons.go:69] Setting metrics-server=true in profile "addons-684000"
	I0917 09:56:15.729393    2206 config.go:182] Loaded profile config "addons-684000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 09:56:15.729400    2206 addons.go:234] Setting addon metrics-server=true in "addons-684000"
	I0917 09:56:15.729376    2206 addons.go:69] Setting helm-tiller=true in profile "addons-684000"
	I0917 09:56:15.729431    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.729432    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.729445    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.729448    2206 addons.go:234] Setting addon helm-tiller=true in "addons-684000"
	I0917 09:56:15.729454    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.729459    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.729461    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729473    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.729510    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729602    2206 addons.go:69] Setting storage-provisioner=true in profile "addons-684000"
	I0917 09:56:15.729612    2206 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-684000"
	I0917 09:56:15.729619    2206 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-684000"
	I0917 09:56:15.729623    2206 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-684000"
	I0917 09:56:15.729621    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.729632    2206 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-684000"
	I0917 09:56:15.729710    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729586    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.729816    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.729133    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.730939    2206 addons.go:69] Setting volcano=true in profile "addons-684000"
	I0917 09:56:15.730943    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.731122    2206 addons.go:234] Setting addon volcano=true in "addons-684000"
	I0917 09:56:15.731192    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.730875    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.731300    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.731657    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.731686    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.729613    2206 addons.go:234] Setting addon storage-provisioner=true in "addons-684000"
	I0917 09:56:15.731861    2206 addons.go:69] Setting volumesnapshots=true in profile "addons-684000"
	I0917 09:56:15.731852    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.731968    2206 addons.go:234] Setting addon volumesnapshots=true in "addons-684000"
	I0917 09:56:15.732044    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.732051    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.732046    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.732171    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.731826    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.732280    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.732345    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.734432    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.734947    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.734896    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.734976    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.734984    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.734856    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.735222    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.735223    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.735254    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.735290    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.735433    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.735560    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.744830    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49674
	I0917 09:56:15.748850    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49676
	I0917 09:56:15.749233    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.749395    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49677
	I0917 09:56:15.753766    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49681
	I0917 09:56:15.753790    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.753864    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49680
	I0917 09:56:15.753743    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.755122    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.755313    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.755156    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49682
	I0917 09:56:15.755376    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49683
	I0917 09:56:15.755596    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.755648    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.755708    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.755761    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.757747    2206 out.go:177] * Verifying Kubernetes components...
	I0917 09:56:15.759247    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49688
	I0917 09:56:15.759254    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.759276    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.779724    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.779724    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.780668    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49694
	I0917 09:56:15.781337    2206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 09:56:15.781817    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.782097    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.782139    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.782152    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49692
	I0917 09:56:15.782111    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49690
	I0917 09:56:15.782224    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.782283    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.782278    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.782350    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49689
	I0917 09:56:15.782365    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.782407    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.782469    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49691
	I0917 09:56:15.782497    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.785093    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49693
	I0917 09:56:15.787424    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.787474    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.787578    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.787217    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.787611    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.787193    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.787223    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.787296    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.787640    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.785337    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.787507    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.787673    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.787672    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.787774    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.788104    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.788201    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.788228    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.788334    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.788650    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.788656    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.788660    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.788669    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.788583    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.788682    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.788619    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.788753    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.788773    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.788807    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.789001    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.789061    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.789065    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.789073    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.788844    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.789105    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.788935    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.792130    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.792145    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.792154    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.792187    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.792214    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.792231    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.792238    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.792239    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.792251    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.792263    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.792263    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.792273    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.792288    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.792311    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.793945    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.794498    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.794483    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.794586    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.794612    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.795070    2206 addons.go:234] Setting addon default-storageclass=true in "addons-684000"
	I0917 09:56:15.795183    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.795414    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49702
	I0917 09:56:15.795458    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.798003    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.797555    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.798658    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49703
	I0917 09:56:15.798801    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.799239    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.800743    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.802355    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.802468    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.802534    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.802732    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.802785    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.802789    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.802807    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.802974    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.802996    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49706
	I0917 09:56:15.803531    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.805782    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.806070    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.806333    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.808784    2206 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-684000"
	I0917 09:56:15.808805    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.809051    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.809103    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49708
	I0917 09:56:15.809258    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.810438    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.811363    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.811457    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.811630    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.816087    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.816847    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.816860    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49711
	I0917 09:56:15.816167    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:15.816868    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.814520    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49710
	I0917 09:56:15.816826    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.818066    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.818235    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.818588    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.818622    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49712
	I0917 09:56:15.818685    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.818777    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.818902    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.818958    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.818964    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.819121    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:15.821886    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.822271    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.822384    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.822506    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.822592    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49716
	I0917 09:56:15.822648    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:15.822744    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.824549    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.826625    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.826644    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.826654    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.826665    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.826664    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49718
	I0917 09:56:15.826570    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.826547    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.826697    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49719
	I0917 09:56:15.827883    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.827847    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.827913    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.828073    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.828118    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.828280    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.828270    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.828447    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.828363    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.828523    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.833399    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.833411    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.833419    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.833435    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.833429    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.833471    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.833446    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.834998    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.835109    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.847885    2206 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 09:56:15.835173    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.835253    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.835232    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.835348    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49722
	I0917 09:56:15.835545    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.840507    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49724
	I0917 09:56:15.840632    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49723
	I0917 09:56:15.840731    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.840671    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.841156    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49725
	I0917 09:56:15.843518    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49726
	I0917 09:56:15.844920    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49727
	I0917 09:56:15.847054    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49728
	I0917 09:56:15.848256    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.848512    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.869170    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.848761    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.848798    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.869199    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.848873    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:15.849535    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849539    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849543    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849570    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849579    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849612    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49736
	I0917 09:56:15.849617    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849688    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.849794    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.858319    2206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
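
The pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to the host gateway (192.169.0.1), adds a log directive above errors, and the result is fed back through kubectl replace. Reconstructed from the sed expression, the Corefile fragment it injects is:

        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }
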
	I0917 09:56:15.868922    2206 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 09:56:15.869511    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.869532    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.889745    2206 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 09:56:15.889748    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 09:56:15.890386    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890417    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890419    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890420    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890474    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890603    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890683    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.890687    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:15.910851    2206 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 09:56:15.911001    2206 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 09:56:15.911016    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911035    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911023    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:15.911045    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911060    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911061    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911067    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911070    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.911238    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.911287    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.911362    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.911379    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.911511    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:15.931867    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 09:56:15.953261    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:15.932284    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.945902    2206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 09:56:15.952887    2206 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 09:56:15.932178    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:15.953281    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.953355    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:15.953469    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.953483    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.953488    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.953494    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:15.953508    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.953517    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:15.954596    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.954625    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.974324    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:15.974391    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:16.011064    2206 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 09:56:16.011199    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 09:56:16.011217    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.011256    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.011282    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.011383    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.011443    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.011573    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:16.011676    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.011710    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.048454    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.048466    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.011735    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.011753    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.011766    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.011782    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:16.011812    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:16.011828    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:16.011844    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:16.012884    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.012884    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.012932    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.048108    2206 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 09:56:16.048610    2206 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 09:56:16.048632    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:16.048535    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.048713    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:16.048638    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.049032    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.049079    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.049034    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.049058    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.050968    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.051212    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.051834    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.051912    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.053865    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.054922    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.054945    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.060968    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49740
	I0917 09:56:16.061734    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49741
	I0917 09:56:16.068839    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 09:56:16.068863    2206 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 09:56:16.068937    2206 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 09:56:16.069406    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.069688    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:16.069701    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:16.105925    2206 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 09:56:16.105946    2206 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 09:56:16.133748    2206 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 09:56:16.142883    2206 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 09:56:16.142880    2206 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 09:56:16.142880    2206 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 09:56:16.143050    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 09:56:16.145052    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 09:56:16.180122    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.180236    2206 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 09:56:16.180253    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.180588    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:16.180659    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:16.191716    2206 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0917 09:56:16.280069    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:16.280072    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:16.192322    2206 node_ready.go:35] waiting up to 6m0s for node "addons-684000" to be "Ready" ...
	I0917 09:56:16.256493    2206 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 09:56:16.317164    2206 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 09:56:16.258918    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 09:56:16.258944    2206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 09:56:16.259287    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.269730    2206 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 09:56:16.354263    2206 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 09:56:16.279878    2206 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 09:56:16.354303    2206 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 09:56:16.237788    2206 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 09:56:16.280394    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:16.280397    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:16.316915    2206 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 09:56:16.334360    2206 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 09:56:16.354110    2206 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 09:56:16.354323    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.354371    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.374670    2206 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 09:56:16.391003    2206 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 09:56:16.394702    2206 node_ready.go:49] node "addons-684000" has status "Ready":"True"
	I0917 09:56:16.412091    2206 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 09:56:16.412274    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:16.412296    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:16.449034    2206 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 09:56:16.449350    2206 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 09:56:16.449402    2206 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 09:56:16.449425    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.449437    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 09:56:16.449442    2206 node_ready.go:38] duration metric: took 169.313139ms for node "addons-684000" to be "Ready" ...
	I0917 09:56:16.449684    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.449691    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.449719    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:16.449733    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.451742    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.451769    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:16.486230    2206 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 09:56:16.486531    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 09:56:16.486559    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.486563    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.486565    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.506998    2206 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 09:56:16.507012    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 09:56:16.486664    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:16.486657    2206 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 09:56:16.507035    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.486993    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.487031    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.487041    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.487044    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.487055    2206 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 09:56:16.544917    2206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 09:56:16.487178    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.544955    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.504348    2206 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 09:56:16.545074    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 09:56:16.506887    2206 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 09:56:16.580954    2206 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 09:56:16.580967    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 09:56:16.545311    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.580982    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.507210    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.508032    2206 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 09:56:16.545373    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.581035    2206 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 09:56:16.545360    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.545392    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.506894    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 09:56:16.581188    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.560047    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 09:56:16.581212    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.581239    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.545554    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.581246    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.581276    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.581304    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.601104    2206 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:16.639448    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.639526    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.639527    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.639536    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.639544    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.601858    2206 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 09:56:16.602158    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.621347    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 09:56:16.631921    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 09:56:16.639163    2206 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 09:56:16.639589    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.639582    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 09:56:16.660115    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.639612    2206 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 09:56:16.639744    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.639772    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.660310    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.660339    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.660335    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.696875    2206 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 09:56:16.697322    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.723240    2206 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 09:56:16.734192    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 09:56:16.734333    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.771048    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 09:56:16.771410    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.785290    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 09:56:16.792537    2206 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 09:56:16.792547    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 09:56:16.792559    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.792706    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.792799    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.792880    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.792976    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.806048    2206 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 09:56:16.806060    2206 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 09:56:16.806886    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 09:56:16.828977    2206 out.go:177]   - Using image docker.io/busybox:stable
	I0917 09:56:16.831472    2206 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-684000" context rescaled to 1 replicas
	I0917 09:56:16.837879    2206 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 09:56:16.837890    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 09:56:16.853497    2206 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 09:56:16.853518    2206 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 09:56:16.865661    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 09:56:16.901982    2206 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 09:56:16.901996    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 09:56:16.902009    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:16.902156    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:16.902246    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:16.902378    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:16.902468    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:16.925189    2206 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 09:56:16.925203    2206 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 09:56:16.939027    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 09:56:16.952519    2206 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 09:56:16.952532    2206 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 09:56:16.972291    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 09:56:16.976251    2206 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 09:56:16.976264    2206 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 09:56:16.980937    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 09:56:16.993542    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 09:56:17.039022    2206 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 09:56:17.059843    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 09:56:17.059857    2206 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 09:56:17.059872    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:17.060021    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:17.060141    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:17.060230    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:17.060334    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:17.094099    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 09:56:17.159741    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:17.159755    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:17.159919    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:17.159926    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:17.159932    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:17.159942    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:17.159947    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:17.160062    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:17.160070    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:17.160078    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:17.170774    2206 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 09:56:17.170787    2206 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 09:56:17.171830    2206 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 09:56:17.171850    2206 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 09:56:17.195220    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 09:56:17.245079    2206 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 09:56:17.245092    2206 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 09:56:17.415882    2206 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 09:56:17.415895    2206 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 09:56:17.540197    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 09:56:17.565711    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 09:56:17.626456    2206 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 09:56:17.626468    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 09:56:17.803311    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 09:56:17.803329    2206 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 09:56:17.882860    2206 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 09:56:17.882875    2206 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 09:56:18.000553    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 09:56:18.067448    2206 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 09:56:18.067463    2206 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 09:56:18.088332    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 09:56:18.088345    2206 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 09:56:18.165327    2206 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 09:56:18.165340    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 09:56:18.524631    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 09:56:18.525656    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 09:56:18.525664    2206 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 09:56:18.644830    2206 pod_ready.go:103] pod "etcd-addons-684000" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:18.773972    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 09:56:18.773985    2206 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 09:56:19.170472    2206 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 09:56:19.170487    2206 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 09:56:19.327359    2206 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 09:56:19.327371    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 09:56:19.376999    2206 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 09:56:19.377017    2206 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 09:56:19.423102    2206 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 09:56:19.423114    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 09:56:19.582772    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.943144502s)
	I0917 09:56:19.582800    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:19.582810    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:19.582884    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.98082672s)
	I0917 09:56:19.582905    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:19.582911    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:19.582979    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:19.582988    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:19.582994    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:19.582999    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:19.582997    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:19.583047    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:19.583048    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:19.583055    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:19.583061    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:19.583070    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:19.583148    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:19.583161    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:19.583175    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:19.583242    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:19.583251    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:19.583254    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:19.587807    2206 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 09:56:19.587818    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 09:56:19.620658    2206 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-684000 service yakd-dashboard -n yakd-dashboard
	
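By default `minikube service` tries to open the service URL in a browser; on a headless agent like this Jenkins worker, the standard `--url` flag prints the reachable address instead, e.g.:

		minikube -p addons-684000 service yakd-dashboard -n yakd-dashboard --url
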
	I0917 09:56:19.924442    2206 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 09:56:19.924454    2206 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 09:56:20.231802    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 09:56:20.655043    2206 pod_ready.go:103] pod "etcd-addons-684000" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:21.203072    2206 pod_ready.go:93] pod "etcd-addons-684000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:56:21.203087    2206 pod_ready.go:82] duration metric: took 4.563621505s for pod "etcd-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:21.203095    2206 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:21.775744    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.983661816s)
	I0917 09:56:21.775762    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.968811302s)
	W0917 09:56:21.775779    2206 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 09:56:21.775786    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:21.775818    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.910100416s)
	I0917 09:56:21.775820    2206 retry.go:31] will retry after 158.29055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 09:56:21.775855    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:21.775834    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:21.775864    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:21.776013    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:21.776026    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:21.776037    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:21.776042    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:21.776044    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:21.776080    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:21.776088    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:21.776091    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:21.776102    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:21.776110    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:21.776173    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:21.776183    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:21.776217    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:21.776259    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:21.776297    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:21.776304    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:21.935609    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
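Note the failure retried above: the first batched apply raced CRD registration, so "VolumeSnapshotClass" had no resource mapping yet ("ensure CRDs are installed first"), and minikube's fallback is the retry with `kubectl apply --force` just issued. A minimal sketch of the conventional way to avoid that race (not what minikube itself does here) is to apply the CRDs on their own, wait for them to be established, and only then apply the objects that depend on them:

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
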
	I0917 09:56:23.249112    2206 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 09:56:23.249139    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:23.249277    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:23.249383    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:23.249487    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:23.249583    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:23.255192    2206 pod_ready.go:103] pod "kube-apiserver-addons-684000" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:23.518609    2206 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 09:56:23.573206    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.600838204s)
	I0917 09:56:23.573224    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.579610176s)
	I0917 09:56:23.573234    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.573248    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.573258    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.479085083s)
	I0917 09:56:23.573239    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.573273    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.573280    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.573289    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.573429    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.573439    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.573446    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.573445    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:23.573447    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:23.573457    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.573462    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:23.573465    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.573485    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.573491    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.573466    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.573471    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.573514    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.573522    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.573495    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.573678    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:23.573711    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.573718    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.573751    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:23.573752    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.573828    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.573772    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:23.573773    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.573835    2206 addons.go:475] Verifying addon ingress=true in "addons-684000"
	I0917 09:56:23.573850    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.612529    2206 out.go:177] * Verifying ingress addon...
	I0917 09:56:23.654423    2206 addons.go:234] Setting addon gcp-auth=true in "addons-684000"
	I0917 09:56:23.654452    2206 host.go:66] Checking if "addons-684000" exists ...
	I0917 09:56:23.654729    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:23.654753    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:23.663725    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49759
	I0917 09:56:23.664066    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:23.664416    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:23.664435    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:23.664619    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:23.665006    2206 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:56:23.665035    2206 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 09:56:23.671250    2206 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 09:56:23.673864    2206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49761
	I0917 09:56:23.674199    2206 main.go:141] libmachine: () Calling .GetVersion
	I0917 09:56:23.674509    2206 main.go:141] libmachine: Using API Version  1
	I0917 09:56:23.674520    2206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 09:56:23.674720    2206 main.go:141] libmachine: () Calling .GetMachineName
	I0917 09:56:23.674843    2206 main.go:141] libmachine: (addons-684000) Calling .GetState
	I0917 09:56:23.674916    2206 main.go:141] libmachine: (addons-684000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 09:56:23.675003    2206 main.go:141] libmachine: (addons-684000) DBG | hyperkit pid from json: 2221
	I0917 09:56:23.676094    2206 main.go:141] libmachine: (addons-684000) Calling .DriverName
	I0917 09:56:23.676265    2206 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 09:56:23.676277    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHHostname
	I0917 09:56:23.676357    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHPort
	I0917 09:56:23.676437    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHKeyPath
	I0917 09:56:23.676509    2206 main.go:141] libmachine: (addons-684000) Calling .GetSSHUsername
	I0917 09:56:23.676583    2206 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/addons-684000/id_rsa Username:docker}
	I0917 09:56:23.684986    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:23.684996    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:23.685131    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:23.685138    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:23.686453    2206 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 09:56:23.686466    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:24.174301    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:24.693954    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:25.230902    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:25.675711    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:25.729505    2206 pod_ready.go:103] pod "kube-apiserver-addons-684000" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:25.893376    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.698058137s)
	I0917 09:56:25.893402    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.353114172s)
	I0917 09:56:25.893419    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893404    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893439    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893454    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893466    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.327661991s)
	I0917 09:56:25.893485    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893496    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893555    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.89290081s)
	I0917 09:56:25.893584    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893596    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893667    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.368931823s)
	I0917 09:56:25.893698    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893712    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893738    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.893738    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.893746    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.893772    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.893776    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.893784    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893789    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.893783    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.893798    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893790    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.893802    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893781    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.893833    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893800    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.893841    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893805    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893894    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893800    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.893908    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.893924    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.893933    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.893939    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:25.893945    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:25.894090    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.894124    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.894143    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.894150    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.894156    2206 addons.go:475] Verifying addon metrics-server=true in "addons-684000"
	I0917 09:56:25.894243    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.894251    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.894264    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.894271    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.894348    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.894377    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.894389    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.894396    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:25.894398    2206 addons.go:475] Verifying addon registry=true in "addons-684000"
	I0917 09:56:25.894425    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:25.894431    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:25.925031    2206 out.go:177] * Verifying registry addon...
	I0917 09:56:25.999526    2206 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 09:56:26.022405    2206 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 09:56:26.022418    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:26.048734    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:26.048751    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:26.048966    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:26.048967    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:26.048973    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:26.198780    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:26.264073    2206 pod_ready.go:93] pod "kube-apiserver-addons-684000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:56:26.264086    2206 pod_ready.go:82] duration metric: took 5.060942124s for pod "kube-apiserver-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:26.264094    2206 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:26.512883    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:26.522408    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.29051851s)
	I0917 09:56:26.522420    2206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.586744088s)
	I0917 09:56:26.522435    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:26.522446    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:26.522457    2206 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.846156758s)
	I0917 09:56:26.522456    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:26.522488    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:26.522709    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:26.522717    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:26.522740    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:26.522736    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:26.522755    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:26.522760    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:26.522769    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:26.522789    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:26.522772    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:26.522804    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:26.522928    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:26.522937    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:26.522948    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:26.523068    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:26.523097    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:26.523106    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:26.523113    2206 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-684000"
	I0917 09:56:26.561961    2206 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 09:56:26.602922    2206 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 09:56:26.660709    2206 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 09:56:26.661340    2206 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 09:56:26.697875    2206 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 09:56:26.697890    2206 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 09:56:26.703696    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:26.703765    2206 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 09:56:26.703774    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:26.741262    2206 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 09:56:26.741274    2206 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 09:56:26.790370    2206 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 09:56:26.790383    2206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 09:56:26.805575    2206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
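
The three gcp-auth manifests above follow the pattern used for every addon in this log: files are copied into /etc/kubernetes/addons on the node, then applied in a single kubectl invocation against the in-VM kubeconfig. Below is a minimal local sketch of that apply step, not minikube's actual implementation: the SSH transport that ssh_runner.go provides is elided, and only the binary and manifest paths are taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs a single `kubectl apply -f a.yaml -f b.yaml ...`
// with the node's kubeconfig, mirroring the command logged above.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/gcp-auth-ns.yaml",
			"/etc/kubernetes/addons/gcp-auth-service.yaml",
			"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
		},
	)
	if err != nil {
		fmt.Println(err)
	}
}

Batching all manifests into one kubectl run, as the log shows, keeps the apply atomic per addon and avoids one SSH round-trip per file.
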
	I0917 09:56:27.002682    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:27.166730    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:27.174579    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:27.470547    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:27.470560    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:27.470735    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:27.470750    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:27.470757    2206 main.go:141] libmachine: Making call to close driver server
	I0917 09:56:27.470761    2206 main.go:141] libmachine: (addons-684000) Calling .Close
	I0917 09:56:27.470764    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:27.470881    2206 main.go:141] libmachine: Successfully made call to close driver server
	I0917 09:56:27.470885    2206 main.go:141] libmachine: (addons-684000) DBG | Closing plugin on server side
	I0917 09:56:27.470893    2206 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 09:56:27.471755    2206 addons.go:475] Verifying addon gcp-auth=true in "addons-684000"
	I0917 09:56:27.512600    2206 out.go:177] * Verifying gcp-auth addon...
	I0917 09:56:27.570561    2206 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 09:56:27.574316    2206 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 09:56:27.574817    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:27.675678    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:27.675749    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:28.002434    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:28.166226    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:28.173889    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:28.268665    2206 pod_ready.go:103] pod "kube-controller-manager-addons-684000" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:28.502112    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:28.665516    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:28.673738    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:29.002654    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:29.165758    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:29.173722    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:29.502042    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:29.665950    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:29.673582    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:30.001685    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:30.175478    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:30.175554    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:30.504957    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:30.665893    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:30.674557    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:30.767906    2206 pod_ready.go:103] pod "kube-controller-manager-addons-684000" in "kube-system" namespace has status "Ready":"False"
	I0917 09:56:31.002666    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:31.165573    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:31.174126    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:31.502791    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:31.665992    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:31.674295    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:31.768443    2206 pod_ready.go:93] pod "kube-controller-manager-addons-684000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:56:31.768456    2206 pod_ready.go:82] duration metric: took 5.504308774s for pod "kube-controller-manager-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:31.768462    2206 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:31.773903    2206 pod_ready.go:93] pod "kube-scheduler-addons-684000" in "kube-system" namespace has status "Ready":"True"
	I0917 09:56:31.773915    2206 pod_ready.go:82] duration metric: took 5.448549ms for pod "kube-scheduler-addons-684000" in "kube-system" namespace to be "Ready" ...
	I0917 09:56:31.773921    2206 pod_ready.go:39] duration metric: took 15.266749992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
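
The pod_ready lines above summarize a poll loop: each system-critical pod is re-read until its PodReady condition reports True, with a 6m0s ceiling per pod. A rough client-go sketch of that predicate and loop follows; it assumes an already-configured *kubernetes.Clientset, and minikube's own pod_ready.go differs in detail.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the PodReady condition is True, the
// same predicate behind the `"Ready":"True"` lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady re-reads the named kube-system pod until it is Ready
// or the timeout (6m0s in the log) elapses.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
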
	I0917 09:56:31.773944    2206 api_server.go:52] waiting for apiserver process to appear ...
	I0917 09:56:31.774003    2206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 09:56:31.794421    2206 api_server.go:72] duration metric: took 16.065409264s to wait for apiserver process to appear ...
	I0917 09:56:31.794433    2206 api_server.go:88] waiting for apiserver healthz status ...
	I0917 09:56:31.794455    2206 api_server.go:253] Checking apiserver healthz at https://192.169.0.2:8443/healthz ...
	I0917 09:56:31.802546    2206 api_server.go:279] https://192.169.0.2:8443/healthz returned 200:
	ok
	I0917 09:56:31.803197    2206 api_server.go:141] control plane version: v1.31.1
	I0917 09:56:31.803208    2206 api_server.go:131] duration metric: took 8.770797ms to wait for apiserver health ...
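
The healthz probe logged above is a plain HTTPS GET that expects status 200 and the literal body "ok". An equivalent standalone sketch, with TLS verification skipped since the apiserver presents a cluster-local certificate (the address is copied from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the probe from the log: GET /healthz on the
// apiserver and require HTTP 200 (the body is the literal "ok").
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // cluster-local cert
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.169.0.2:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
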
	I0917 09:56:31.803213    2206 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 09:56:31.814788    2206 system_pods.go:59] 18 kube-system pods found
	I0917 09:56:31.814815    2206 system_pods.go:61] "coredns-7c65d6cfc9-srhsw" [557ed7e3-5964-47e4-85c6-7f2a8c23d88f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 09:56:31.814824    2206 system_pods.go:61] "csi-hostpath-attacher-0" [0acc20ab-0d48-4a07-b29c-45882e0ca27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 09:56:31.814833    2206 system_pods.go:61] "csi-hostpath-resizer-0" [4378db5a-9b6d-4dad-a720-737f86aac5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 09:56:31.814839    2206 system_pods.go:61] "csi-hostpathplugin-277bf" [c7a8e8a8-49c0-4ed5-9344-9614b390a40f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 09:56:31.814846    2206 system_pods.go:61] "etcd-addons-684000" [b6718fd8-34cc-4cdb-83a7-e586d993f631] Running
	I0917 09:56:31.814849    2206 system_pods.go:61] "kube-apiserver-addons-684000" [b5cc083c-9659-4ecb-8c61-7f376e3e5996] Running
	I0917 09:56:31.814851    2206 system_pods.go:61] "kube-controller-manager-addons-684000" [a1288fcf-66d2-41dc-97e4-459e0021751e] Running
	I0917 09:56:31.814855    2206 system_pods.go:61] "kube-ingress-dns-minikube" [58a4aa33-b781-4f66-8174-8e2331f3b76d] Running
	I0917 09:56:31.814857    2206 system_pods.go:61] "kube-proxy-5vq5k" [41421d11-41d1-4f63-968d-56fd2e65c1dc] Running
	I0917 09:56:31.814860    2206 system_pods.go:61] "kube-scheduler-addons-684000" [b266d85c-2fe7-4b13-aa1f-1183c3c121dd] Running
	I0917 09:56:31.814864    2206 system_pods.go:61] "metrics-server-84c5f94fbc-zg4lj" [02fe2d85-c7d2-493d-a247-c14e47795708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 09:56:31.814871    2206 system_pods.go:61] "nvidia-device-plugin-daemonset-5kvkx" [ce22ef59-d12f-4358-a2a7-36598797d86a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 09:56:31.814875    2206 system_pods.go:61] "registry-66c9cd494c-jzjxg" [d41ea116-b4f8-4b3d-ae06-2e78540cb794] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 09:56:31.814881    2206 system_pods.go:61] "registry-proxy-nwrjv" [3f55bb41-a07d-4deb-a7a0-0034c2e839d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 09:56:31.814885    2206 system_pods.go:61] "snapshot-controller-56fcc65765-cxjlv" [7e62331d-3285-461d-8235-297051132692] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 09:56:31.814889    2206 system_pods.go:61] "snapshot-controller-56fcc65765-r6zgh" [eefa75ab-5999-446d-a57a-4bbb02705749] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 09:56:31.814892    2206 system_pods.go:61] "storage-provisioner" [91e955c6-50ff-4ade-aeb4-04c8e399894a] Running
	I0917 09:56:31.814896    2206 system_pods.go:61] "tiller-deploy-b48cc5f79-kmbwj" [98b0df5c-c437-4436-9bd8-2b7954038de5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 09:56:31.814901    2206 system_pods.go:74] duration metric: took 11.682862ms to wait for pod list to return data ...
	I0917 09:56:31.814909    2206 default_sa.go:34] waiting for default service account to be created ...
	I0917 09:56:31.817121    2206 default_sa.go:45] found service account: "default"
	I0917 09:56:31.817130    2206 default_sa.go:55] duration metric: took 2.216763ms for default service account to be created ...
	I0917 09:56:31.817135    2206 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 09:56:31.824707    2206 system_pods.go:86] 18 kube-system pods found
	I0917 09:56:31.824733    2206 system_pods.go:89] "coredns-7c65d6cfc9-srhsw" [557ed7e3-5964-47e4-85c6-7f2a8c23d88f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 09:56:31.824740    2206 system_pods.go:89] "csi-hostpath-attacher-0" [0acc20ab-0d48-4a07-b29c-45882e0ca27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 09:56:31.824745    2206 system_pods.go:89] "csi-hostpath-resizer-0" [4378db5a-9b6d-4dad-a720-737f86aac5aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 09:56:31.824749    2206 system_pods.go:89] "csi-hostpathplugin-277bf" [c7a8e8a8-49c0-4ed5-9344-9614b390a40f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 09:56:31.824754    2206 system_pods.go:89] "etcd-addons-684000" [b6718fd8-34cc-4cdb-83a7-e586d993f631] Running
	I0917 09:56:31.824758    2206 system_pods.go:89] "kube-apiserver-addons-684000" [b5cc083c-9659-4ecb-8c61-7f376e3e5996] Running
	I0917 09:56:31.824761    2206 system_pods.go:89] "kube-controller-manager-addons-684000" [a1288fcf-66d2-41dc-97e4-459e0021751e] Running
	I0917 09:56:31.824764    2206 system_pods.go:89] "kube-ingress-dns-minikube" [58a4aa33-b781-4f66-8174-8e2331f3b76d] Running
	I0917 09:56:31.824768    2206 system_pods.go:89] "kube-proxy-5vq5k" [41421d11-41d1-4f63-968d-56fd2e65c1dc] Running
	I0917 09:56:31.824770    2206 system_pods.go:89] "kube-scheduler-addons-684000" [b266d85c-2fe7-4b13-aa1f-1183c3c121dd] Running
	I0917 09:56:31.824775    2206 system_pods.go:89] "metrics-server-84c5f94fbc-zg4lj" [02fe2d85-c7d2-493d-a247-c14e47795708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 09:56:31.824779    2206 system_pods.go:89] "nvidia-device-plugin-daemonset-5kvkx" [ce22ef59-d12f-4358-a2a7-36598797d86a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 09:56:31.824784    2206 system_pods.go:89] "registry-66c9cd494c-jzjxg" [d41ea116-b4f8-4b3d-ae06-2e78540cb794] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 09:56:31.824788    2206 system_pods.go:89] "registry-proxy-nwrjv" [3f55bb41-a07d-4deb-a7a0-0034c2e839d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 09:56:31.824793    2206 system_pods.go:89] "snapshot-controller-56fcc65765-cxjlv" [7e62331d-3285-461d-8235-297051132692] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 09:56:31.824797    2206 system_pods.go:89] "snapshot-controller-56fcc65765-r6zgh" [eefa75ab-5999-446d-a57a-4bbb02705749] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 09:56:31.824800    2206 system_pods.go:89] "storage-provisioner" [91e955c6-50ff-4ade-aeb4-04c8e399894a] Running
	I0917 09:56:31.824804    2206 system_pods.go:89] "tiller-deploy-b48cc5f79-kmbwj" [98b0df5c-c437-4436-9bd8-2b7954038de5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 09:56:31.824809    2206 system_pods.go:126] duration metric: took 7.670441ms to wait for k8s-apps to be running ...
	I0917 09:56:31.824819    2206 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 09:56:31.824876    2206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:56:31.836797    2206 system_svc.go:56] duration metric: took 11.975813ms WaitForService to wait for kubelet
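
The kubelet check reduces to the exit status of the systemctl command run a few lines earlier. A sketch of the same idea, run locally for illustration; kubeletActive is a hypothetical helper name, and minikube routes the command through its SSH runner instead:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the command from the log line above;
// `systemctl is-active --quiet` exits 0 only when the unit is
// active, so the error value of Run is the whole answer.
func kubeletActive() bool {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
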
	I0917 09:56:31.836811    2206 kubeadm.go:582] duration metric: took 16.107802029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 09:56:31.836826    2206 node_conditions.go:102] verifying NodePressure condition ...
	I0917 09:56:31.838907    2206 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 09:56:31.838923    2206 node_conditions.go:123] node cpu capacity is 2
	I0917 09:56:31.838944    2206 node_conditions.go:105] duration metric: took 2.112523ms to run NodePressure ...
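
The NodePressure step reads the node object and reports its CPU and ephemeral-storage figures. The sketch below retrieves the same numbers via client-go; it assumes the values come from Status.Capacity, and if minikube reads Allocatable instead, the shape of the code is unchanged.

package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeCapacity fetches the figures the log reports: cpu capacity
// (2) and ephemeral-storage capacity (17734596Ki).
func nodeCapacity(cs *kubernetes.Clientset, name string) (cpu, storage string, err error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return "", "", err
	}
	c := node.Status.Capacity[corev1.ResourceCPU]
	s := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	return c.String(), s.String(), nil
}
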
	I0917 09:56:31.838957    2206 start.go:241] waiting for startup goroutines ...
	I0917 09:56:32.003368    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:32.166285    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:32.175215    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:32.502808    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:32.666231    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:32.674176    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:33.003282    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:33.165759    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:33.173963    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:33.502156    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:33.665636    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:33.673838    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:34.001921    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:34.165952    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:34.174382    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:34.502631    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:34.666361    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:34.674283    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:35.003262    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:35.167362    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:35.173785    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:35.503074    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:35.665837    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:35.676746    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:36.003304    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:36.166261    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:36.174751    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:36.503200    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:36.665865    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:36.673870    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:37.003416    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:37.165849    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:37.174229    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:37.579695    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:37.665797    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:37.674079    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:38.002483    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:38.165809    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:38.174082    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:38.501801    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:38.665859    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:38.674388    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:39.002349    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:39.165523    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:39.173980    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:39.502668    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:39.666236    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:39.673695    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:40.002099    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:40.224992    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:40.225049    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:40.502037    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:40.665673    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:40.674265    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:41.001978    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:41.165457    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:41.173610    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:41.502191    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:41.666226    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:41.674619    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:42.003718    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:42.167797    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:42.173745    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:42.505056    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:42.666458    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:42.675233    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:43.003682    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:43.165587    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:43.175110    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:43.502191    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:43.668341    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:43.674126    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:44.002293    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:44.165904    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:44.174921    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:44.501828    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:44.665566    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:44.673456    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:45.002253    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:45.165686    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:45.174186    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:45.504006    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:45.666768    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:45.673632    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:46.002290    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:46.165666    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:46.174926    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:46.506432    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:46.668427    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:46.673638    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:47.003690    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:47.167301    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:47.174323    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:47.504616    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:47.665542    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:47.674861    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:48.003390    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:48.166210    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:48.174778    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:48.502250    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:48.665729    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:48.674707    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:49.003221    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:49.165848    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:49.175341    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:49.504205    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:49.667872    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:49.706979    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:50.005165    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:50.166838    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:50.176178    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:50.505851    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:50.665912    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:50.674305    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:51.002254    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:51.165822    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:51.174221    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:51.503566    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:51.668012    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:51.675180    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:52.005922    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:52.166720    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:52.175544    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:52.501831    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:52.666113    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:52.674659    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:53.003344    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:53.205810    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:53.205900    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:53.501948    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:53.666318    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:53.674479    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:54.002564    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:54.166830    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:54.175028    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:54.502484    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:54.666240    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:54.673809    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:55.002254    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:55.164240    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:55.174713    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:55.502329    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:55.664967    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:55.674158    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:56.002208    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:56.164355    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:56.173877    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:56.503937    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:56.667390    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:56.674030    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:57.003157    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:57.167379    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:57.175463    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:57.503420    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:57.665482    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:57.675590    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:58.002533    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:58.165085    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:58.174678    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:58.503013    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:58.665969    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:58.673884    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:59.004071    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:59.165151    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:59.174449    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:56:59.503175    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:56:59.664888    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:56:59.674358    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:00.003949    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:00.165227    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:00.173871    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:00.583710    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:00.683254    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:00.683310    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:01.006866    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:01.165455    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:01.175435    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:01.503020    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:01.666621    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:01.675625    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:02.003856    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:02.165964    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:02.174526    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:02.502367    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:02.666034    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:02.674021    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:03.002297    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:03.166318    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:03.174411    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:03.502025    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:03.667788    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:03.677135    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:04.003149    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:04.169133    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:04.173675    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:04.502222    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:04.669670    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:04.679194    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:05.002810    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:05.165843    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:05.175289    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:05.502271    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:05.665643    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:05.675137    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:06.002929    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:06.166338    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:06.174765    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:06.503997    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:06.665759    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:06.675329    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:07.002357    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:07.165795    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:07.175100    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:07.502888    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:07.665206    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:07.674891    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:08.002491    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:08.165924    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:08.174165    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:08.566168    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:08.665222    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:08.676141    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:09.004359    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 09:57:09.166845    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:09.175058    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:09.502438    2206 kapi.go:107] duration metric: took 43.502531363s to wait for kubernetes.io/minikube-addons=registry ...
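
The long runs of kapi.go lines before and after this point come from one loop per label selector: list the matching pods and keep polling until every match is Running. A condensed sketch under those assumptions (selector and namespace copied from the log; the real kapi.go helper tracks more states than shown):

package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForLabel lists pods matching the selector and polls until
// every match is Running; each failed pass corresponds to one
// "current state: Pending" line in the log.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

Called as waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute), a loop of this shape would produce the 43.5s duration metric logged just above once the registry pod turns Running.
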
	I0917 09:57:09.665820    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:09.675205    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:10.183818    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:10.184065    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:10.665236    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:10.674013    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:11.177555    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:11.177621    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:11.667124    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:11.674934    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:12.175842    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:12.176863    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:12.665691    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:12.675091    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:13.165463    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:13.174356    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:13.666133    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:13.673903    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 09:57:14.166176    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:14.174561    2206 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 98 near-identical poll entries elided: kapi.go:96 re-checked both label selectors roughly every 500 ms; "kubernetes.io/minikube-addons=csi-hostpath-driver" and "app.kubernetes.io/name=ingress-nginx" both remained Pending from 09:57:14 through 09:57:38 ...]
	I0917 09:57:39.165664    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 09:57:39.174417    2206 kapi.go:107] duration metric: took 1m15.5025107s to wait for app.kubernetes.io/name=ingress-nginx ...
	[... 19 near-identical poll entries elided: "kubernetes.io/minikube-addons=csi-hostpath-driver" remained Pending from 09:57:39 through 09:57:48 ...]
	I0917 09:57:49.168866    2206 kapi.go:107] duration metric: took 1m22.506803605s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 09:57:49.575346    2206 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 09:57:49.575357    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 09:57:50.074877    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 134 near-identical poll entries elided: "kubernetes.io/minikube-addons=gcp-auth" remained Pending, re-checked roughly every 500 ms, from 09:57:50 through 09:58:57 ...]
	I0917 09:58:57.574745    2206 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 09:58:58.076798    2206 kapi.go:107] duration metric: took 2m30.504921252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 09:58:58.107484    2206 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-684000 cluster.
	I0917 09:58:58.165917    2206 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 09:58:58.188103    2206 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 09:58:58.209341    2206 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, yakd, storage-provisioner, nvidia-device-plugin, helm-tiller, default-storageclass, metrics-server, volcano, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0917 09:58:58.230838    2206 addons.go:510] duration metric: took 2m42.500548486s for enable addons: enabled=[ingress-dns cloud-spanner yakd storage-provisioner nvidia-device-plugin helm-tiller default-storageclass metrics-server volcano inspektor-gadget storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0917 09:58:58.230881    2206 start.go:246] waiting for cluster config update ...
	I0917 09:58:58.230912    2206 start.go:255] writing updated cluster config ...
	I0917 09:58:58.252167    2206 ssh_runner.go:195] Run: rm -f paused
	I0917 09:58:58.295186    2206 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 09:58:58.316318    2206 out.go:201] 
	W0917 09:58:58.337852    2206 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 09:58:58.359175    2206 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 09:58:58.401028    2206 out.go:177] * Done! kubectl is now configured to use "addons-684000" cluster and "default" namespace by default
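	
	==> sketch: the label-selector wait loop behind the "waiting for pod" runs <==
	The long Pending runs earlier in this log come from minikube's kapi helper, which lists pods by label selector roughly every 500 ms until they leave Pending, then prints the "duration metric" line. The sketch below is a minimal reconstruction with client-go, not minikube's actual kapi.go; the kubeconfig path, namespace, selector, and timeout are illustrative assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForLabel polls pods matching selector until all are Running,
	// mirroring the kapi.go:96 / kapi.go:107 lines in the log above.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or no pods yet: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		}
		return err
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		_ = waitForLabel(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute)
	}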
	
	
	==> Docker <==
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.179063134Z" level=info msg="shim disconnected" id=1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896 namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.179210220Z" level=warning msg="cleaning up after shim disconnected" id=1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896 namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.179327140Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1269]: time="2024-09-17T17:08:52.179501626Z" level=info msg="ignoring event" container=1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:52 addons-684000 dockerd[1269]: time="2024-09-17T17:08:52.309983806Z" level=info msg="ignoring event" container=ee89f4b94e9b9bce62b543c65b04721d6eb3517ac6b714d8928642a4f326ac5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.310484593Z" level=info msg="shim disconnected" id=ee89f4b94e9b9bce62b543c65b04721d6eb3517ac6b714d8928642a4f326ac5b namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.310571496Z" level=warning msg="cleaning up after shim disconnected" id=ee89f4b94e9b9bce62b543c65b04721d6eb3517ac6b714d8928642a4f326ac5b namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.310580416Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1269]: time="2024-09-17T17:08:52.396324658Z" level=info msg="ignoring event" container=c7e72d5aa0b6866371d472ecb933f87e62e8621f35f7561b29bd3d34ab906077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.396870616Z" level=info msg="shim disconnected" id=c7e72d5aa0b6866371d472ecb933f87e62e8621f35f7561b29bd3d34ab906077 namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.396967085Z" level=warning msg="cleaning up after shim disconnected" id=c7e72d5aa0b6866371d472ecb933f87e62e8621f35f7561b29bd3d34ab906077 namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.396999546Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.885671140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.885707509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.885715461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:08:52 addons-684000 dockerd[1275]: time="2024-09-17T17:08:52.885905187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:08:53 addons-684000 cri-dockerd[1167]: time="2024-09-17T17:08:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b125c9844d3cfa24e99133a87cccd09a0210757769903071321cc6b159d35afa/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.197486192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.197596039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.197610062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.198364523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:08:53 addons-684000 dockerd[1269]: time="2024-09-17T17:08:53.270918539Z" level=info msg="ignoring event" container=bfba1b3253951484a5f5ea5a0039836d8b184f2592f3a4b41a9e1438edfd84b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.270913439Z" level=info msg="shim disconnected" id=bfba1b3253951484a5f5ea5a0039836d8b184f2592f3a4b41a9e1438edfd84b1 namespace=moby
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.271409699Z" level=warning msg="cleaning up after shim disconnected" id=bfba1b3253951484a5f5ea5a0039836d8b184f2592f3a4b41a9e1438edfd84b1 namespace=moby
	Sep 17 17:08:53 addons-684000 dockerd[1275]: time="2024-09-17T17:08:53.271446921Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	bfba1b3253951       a416a98b71e22                                                                                                                Less than a second ago   Exited              helper-pod                0                   b125c9844d3cf       helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1
	95f3a22f2b67b       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              4 seconds ago            Exited              busybox                   0                   91b155ed4ad32       test-local-path
	b1c501f75073a       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              10 seconds ago           Exited              helper-pod                0                   324e0249fc0d4       helper-pod-create-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1
	9375b076c869e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            14 seconds ago           Exited              gadget                    7                   175946315a3f2       gadget-9grhj
	0bdf533979bfe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago            Running             gcp-auth                  0                   ef7cf77c97bef       gcp-auth-89d5ffd79-kl445
	56020bede8613       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago           Running             controller                0                   2b67a5634b68a       ingress-nginx-controller-bc57996ff-j2grk
	5e790b8a18706       ce263a8653f9c                                                                                                                11 minutes ago           Exited              patch                     1                   f02f162f51821       ingress-nginx-admission-patch-x9d4r
	80bed237a8cf4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago           Exited              create                    0                   01fb545a5ade6       ingress-nginx-admission-create-lk6lc
	1ac99c16fe17d       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        11 minutes ago           Running             metrics-server            0                   cf606022d3b71       metrics-server-84c5f94fbc-zg4lj
	2fbc7d352d187       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago           Running             local-path-provisioner    0                   e419afef23bf6       local-path-provisioner-86d989889c-sz5wq
	061aaf684dfe5       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  12 minutes ago           Running             tiller                    0                   d0427b7ba55c7       tiller-deploy-b48cc5f79-kmbwj
	bbfed7ee758fe       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago           Running             cloud-spanner-emulator    0                   cb1cf7c2d88f3       cloud-spanner-emulator-769b77f747-p5j7h
	047c735c843ba       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago           Running             minikube-ingress-dns      0                   11a4abff8f215       kube-ingress-dns-minikube
	4ae7c3774cc96       6e38f40d628db                                                                                                                12 minutes ago           Running             storage-provisioner       0                   e1713c34a27c3       storage-provisioner
	a45f11d69f01d       c69fa2e9cbf5f                                                                                                                12 minutes ago           Running             coredns                   0                   b089d2c9add14       coredns-7c65d6cfc9-srhsw
	c62a641e6ed7c       60c005f310ff3                                                                                                                12 minutes ago           Running             kube-proxy                0                   e0b8d3c277a10       kube-proxy-5vq5k
	47af1958055ef       9aa1fad941575                                                                                                                12 minutes ago           Running             kube-scheduler            0                   b813ab08b11a2       kube-scheduler-addons-684000
	1cb6489a40144       175ffd71cce3d                                                                                                                12 minutes ago           Running             kube-controller-manager   0                   7f80d23ca3246       kube-controller-manager-addons-684000
	775f015752c63       2e96e5913fc06                                                                                                                12 minutes ago           Running             etcd                      0                   a164ed158a4db       etcd-addons-684000
	a7a6f63703324       6bab7719df100                                                                                                                12 minutes ago           Running             kube-apiserver            0                   a9ee9c537520b       kube-apiserver-addons-684000
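	
	==> sketch: opting a pod out of the gcp-auth mount <==
	The gcp-auth-89d5ffd79-kl445 webhook shown Running above is what the start log's note about the `gcp-auth-skip-secret` label refers to: it mutates every new pod to mount the credentials unless that label is present. A minimal sketch follows; the pod name, image, and label value "true" are assumptions (the log only names the key).
	
	package main
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				// The key the gcp-auth webhook checks before mutating a pod;
				// labelled pods do not receive the credential mount.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "main",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}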
	
	
	==> controller_ingress [56020bede861] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0917 16:57:38.615859       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0917 16:57:38.616079       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0917 16:57:38.620024       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0917 16:57:38.769412       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0917 16:57:38.794912       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0917 16:57:38.817028       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0917 16:57:38.857616       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"93c0513a-70e3-4cc5-9741-a806ac805966", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0917 16:57:38.869496       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"db4da11b-707e-4904-8a19-38ea9ca802cb", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0917 16:57:38.870941       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"4061c6f8-8593-4993-9f8a-3cb406c6d487", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0917 16:57:40.019342       7 nginx.go:317] "Starting NGINX process"
	I0917 16:57:40.019575       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0917 16:57:40.019717       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0917 16:57:40.020011       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 16:57:40.042053       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0917 16:57:40.042448       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-j2grk"
	I0917 16:57:40.055908       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-j2grk" node="addons-684000"
	I0917 16:57:40.081805       7 controller.go:213] "Backend successfully reloaded"
	I0917 16:57:40.081887       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0917 16:57:40.082080       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-j2grk", UID:"dc0a5a33-e6f6-4fbd-9ac1-67c792fc7ffd", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
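	
	==> sketch: the Lease-based leader election seen above <==
	The leaderelection.go lines above show the controller acquiring the ingress-nginx/ingress-nginx-leader lease before reloading NGINX. Below is a minimal client-go sketch of the same Lease-based election pattern; the timing constants are common defaults, not values read from this controller.
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		id, _ := os.Hostname() // the real controller uses its pod name as identity
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "ingress-nginx-leader", Namespace: "ingress-nginx"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Printf("successfully acquired lease; %s is now the leader", id)
					<-ctx.Done() // leader-only work would run here
				},
				OnStoppedLeading: func() { log.Printf("lost lease") },
				OnNewLeader:      func(l string) { log.Printf("new leader elected: %s", l) },
			},
		})
	}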
	
	
	==> coredns [a45f11d69f01] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.10:36182 - 4867 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000181071s
	[INFO] 10.244.0.10:36182 - 513 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008703s
	[INFO] 10.244.0.10:36530 - 45148 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082619s
	[INFO] 10.244.0.10:36530 - 23390 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070812s
	[INFO] 10.244.0.10:55193 - 15531 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065197s
	[INFO] 10.244.0.10:55193 - 28333 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048824s
	[INFO] 10.244.0.10:48727 - 15710 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130626s
	[INFO] 10.244.0.10:48727 - 4441 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000319683s
	[INFO] 10.244.0.10:58828 - 60326 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044286s
	[INFO] 10.244.0.10:58828 - 49316 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000035862s
	[INFO] 10.244.0.10:35778 - 6604 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036098s
	[INFO] 10.244.0.10:35778 - 34251 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081221s
	[INFO] 10.244.0.10:44326 - 4759 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036643s
	[INFO] 10.244.0.10:44326 - 47253 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030422s
	[INFO] 10.244.0.10:59086 - 12720 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000155093s
	[INFO] 10.244.0.10:59086 - 61874 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115749s
	[INFO] 10.244.0.25:57962 - 24685 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363367s
	[INFO] 10.244.0.25:52165 - 58286 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000932874s
	[INFO] 10.244.0.25:34828 - 24289 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015583s
	[INFO] 10.244.0.25:48289 - 1704 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142592s
	[INFO] 10.244.0.25:39981 - 42589 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062912s
	[INFO] 10.244.0.25:34311 - 24186 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117388s
	[INFO] 10.244.0.25:55410 - 5132 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001630678s
	[INFO] 10.244.0.25:42550 - 18411 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.01551706s
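	
	==> sketch: why coredns logs NXDOMAIN triples before each NOERROR <==
	The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: with `options ndots:5` (visible in the resolv.conf rewrite in the Docker log), a name such as storage.googleapis.com has fewer than five dots, so the resolver tries each search suffix first and only then the name as given. A small sketch of that candidate ordering; the search list is read off the queries above.
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// candidates reproduces the resolver's search-list expansion: names with
	// fewer than ndots dots get each search suffix appended first, and the
	// name as given is tried last.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}
	
	func main() {
		// Search list inferred from the gcp-auth pod's queries logged above.
		search := []string{"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range candidates("storage.googleapis.com", search, 5) {
			fmt.Println(q) // three NXDOMAIN candidates, then the absolute name
		}
	}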
	
	
	==> describe nodes <==
	Name:               addons-684000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-684000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-684000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T09_56_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-684000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-684000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:08:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:04:51 +0000   Tue, 17 Sep 2024 16:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:04:51 +0000   Tue, 17 Sep 2024 16:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:04:51 +0000   Tue, 17 Sep 2024 16:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:04:51 +0000   Tue, 17 Sep 2024 16:56:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.2
	  Hostname:    addons-684000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	System Info:
	  Machine ID:                 d75cc89de9434542b9fb97a0961f010e
	  System UUID:                603c4179-0000-0000-b729-ba221f45cdb7
	  Boot ID:                    098da939-3c50-425d-99c2-ec0fd4aa2d65
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-p5j7h                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-9grhj                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-kl445                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-j2grk                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-srhsw                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-684000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-684000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-684000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5vq5k                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-684000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-zg4lj                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-kmbwj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-86d989889c-sz5wq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-684000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-684000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-684000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node addons-684000 event: Registered Node addons-684000 in Controller
	  Normal  NodeReady                12m   kubelet          Node addons-684000 status is now: NodeReady
	
	
	==> dmesg <==
	[ +14.752838] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.853242] kauditd_printk_skb: 13 callbacks suppressed
	[Sep17 16:57] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.460692] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.127670] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.548864] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.654641] kauditd_printk_skb: 75 callbacks suppressed
	[  +7.274426] kauditd_printk_skb: 16 callbacks suppressed
	[ +11.571986] kauditd_printk_skb: 38 callbacks suppressed
	[Sep17 16:58] kauditd_printk_skb: 28 callbacks suppressed
	[ +22.865404] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 16:59] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.050793] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.104862] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.915508] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.990443] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:03] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:07] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:08] kauditd_printk_skb: 19 callbacks suppressed
	[ +15.697363] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.898215] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.624044] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.584176] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.351844] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.092793] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [775f015752c6] <==
	{"level":"warn","ts":"2024-09-17T16:56:16.101441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.527946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-17T16:56:16.101457Z","caller":"traceutil/trace.go:171","msg":"trace[1751451906] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:329; }","duration":"100.550848ms","start":"2024-09-17T16:56:16.000902Z","end":"2024-09-17T16:56:16.101452Z","steps":["trace[1751451906] 'agreement among raft nodes before linearized reading'  (duration: 100.396742ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:16.101547Z","caller":"traceutil/trace.go:171","msg":"trace[35304618] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"107.215362ms","start":"2024-09-17T16:56:15.994327Z","end":"2024-09-17T16:56:16.101542Z","steps":["trace[35304618] 'process raft request'  (duration: 106.824054ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:16.101617Z","caller":"traceutil/trace.go:171","msg":"trace[569105249] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"100.66124ms","start":"2024-09-17T16:56:16.000953Z","end":"2024-09-17T16:56:16.101614Z","steps":["trace[569105249] 'process raft request'  (duration: 100.214908ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:26.032942Z","caller":"traceutil/trace.go:171","msg":"trace[476405173] linearizableReadLoop","detail":"{readStateIndex:868; appliedIndex:867; }","duration":"101.40887ms","start":"2024-09-17T16:56:25.931522Z","end":"2024-09-17T16:56:26.032931Z","steps":["trace[476405173] 'read index received'  (duration: 27.282898ms)","trace[476405173] 'applied index is now lower than readState.Index'  (duration: 74.125633ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:56:26.033103Z","caller":"traceutil/trace.go:171","msg":"trace[87632403] transaction","detail":"{read_only:false; response_revision:852; number_of_response:1; }","duration":"111.450198ms","start":"2024-09-17T16:56:25.921648Z","end":"2024-09-17T16:56:26.033098Z","steps":["trace[87632403] 'process raft request'  (duration: 37.153326ms)","trace[87632403] 'compare'  (duration: 74.051502ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T16:56:26.033205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.674556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:56:26.033222Z","caller":"traceutil/trace.go:171","msg":"trace[1053027706] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:852; }","duration":"101.698301ms","start":"2024-09-17T16:56:25.931519Z","end":"2024-09-17T16:56:26.033217Z","steps":["trace[1053027706] 'agreement among raft nodes before linearized reading'  (duration: 101.653731ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:56:26.730991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.345378ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8767824582178080553 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" value_size:2509 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-17T16:56:26.731238Z","caller":"traceutil/trace.go:171","msg":"trace[1663232663] transaction","detail":"{read_only:false; response_revision:902; number_of_response:1; }","duration":"136.590591ms","start":"2024-09-17T16:56:26.594640Z","end":"2024-09-17T16:56:26.731231Z","steps":["trace[1663232663] 'process raft request'  (duration: 136.561331ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:26.731314Z","caller":"traceutil/trace.go:171","msg":"trace[1330719039] transaction","detail":"{read_only:false; response_revision:901; number_of_response:1; }","duration":"174.523658ms","start":"2024-09-17T16:56:26.556783Z","end":"2024-09-17T16:56:26.731307Z","steps":["trace[1330719039] 'process raft request'  (duration: 58.832257ms)","trace[1330719039] 'compare'  (duration: 115.253421ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:56:26.731332Z","caller":"traceutil/trace.go:171","msg":"trace[970886520] linearizableReadLoop","detail":"{readStateIndex:917; appliedIndex:916; }","duration":"164.381389ms","start":"2024-09-17T16:56:26.566948Z","end":"2024-09-17T16:56:26.731329Z","steps":["trace[970886520] 'read index received'  (duration: 48.672113ms)","trace[970886520] 'applied index is now lower than readState.Index'  (duration: 115.708904ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T16:56:26.732653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.854654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-hostpathplugin-sa\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-09-17T16:56:26.732671Z","caller":"traceutil/trace.go:171","msg":"trace[419874815] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-hostpathplugin-sa; range_end:; response_count:1; response_revision:902; }","duration":"130.875392ms","start":"2024-09-17T16:56:26.601790Z","end":"2024-09-17T16:56:26.732666Z","steps":["trace[419874815] 'agreement among raft nodes before linearized reading'  (duration: 130.81953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:56:26.731418Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.465332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-attacher\" ","response":"range_response_count:1 size:3509"}
	{"level":"info","ts":"2024-09-17T16:56:26.732781Z","caller":"traceutil/trace.go:171","msg":"trace[811437024] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-attacher; range_end:; response_count:1; response_revision:902; }","duration":"165.83142ms","start":"2024-09-17T16:56:26.566945Z","end":"2024-09-17T16:56:26.732777Z","steps":["trace[811437024] 'agreement among raft nodes before linearized reading'  (duration: 164.40904ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:56:26.732918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.007743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:1 size:535"}
	{"level":"info","ts":"2024-09-17T16:56:26.732930Z","caller":"traceutil/trace.go:171","msg":"trace[1487481766] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:902; }","duration":"130.020553ms","start":"2024-09-17T16:56:26.602906Z","end":"2024-09-17T16:56:26.732926Z","steps":["trace[1487481766] 'agreement among raft nodes before linearized reading'  (duration: 129.984856ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:37.617636Z","caller":"traceutil/trace.go:171","msg":"trace[1663529102] transaction","detail":"{read_only:false; response_revision:983; number_of_response:1; }","duration":"133.215834ms","start":"2024-09-17T16:56:37.484410Z","end":"2024-09-17T16:56:37.617626Z","steps":["trace[1663529102] 'process raft request'  (duration: 132.999023ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:08.626274Z","caller":"traceutil/trace.go:171","msg":"trace[1882593247] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"132.192468ms","start":"2024-09-17T16:57:08.494071Z","end":"2024-09-17T16:57:08.626263Z","steps":["trace[1882593247] 'process raft request'  (duration: 132.016425ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:41.005645Z","caller":"traceutil/trace.go:171","msg":"trace[1834246192] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"144.785235ms","start":"2024-09-17T16:57:40.860845Z","end":"2024-09-17T16:57:41.005630Z","steps":["trace[1834246192] 'process raft request'  (duration: 138.176875ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:07.727587Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1884}
	{"level":"info","ts":"2024-09-17T17:06:07.792648Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1884,"took":"64.30126ms","hash":1757214213,"current-db-size-bytes":8650752,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4870144,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-17T17:06:07.793033Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1757214213,"revision":1884,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T17:08:52.793384Z","caller":"traceutil/trace.go:171","msg":"trace[1964056853] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2826; }","duration":"129.313643ms","start":"2024-09-17T17:08:52.664052Z","end":"2024-09-17T17:08:52.793365Z","steps":["trace[1964056853] 'process raft request'  (duration: 54.428942ms)","trace[1964056853] 'compare'  (duration: 73.872985ms)"],"step_count":2}
	
	
	==> gcp-auth [0bdf533979bf] <==
	2024/09/17 16:58:57 GCP Auth Webhook started!
	2024/09/17 16:59:13 Ready to marshal response ...
	2024/09/17 16:59:13 Ready to write response ...
	2024/09/17 16:59:14 Ready to marshal response ...
	2024/09/17 16:59:14 Ready to write response ...
	2024/09/17 16:59:38 Ready to marshal response ...
	2024/09/17 16:59:38 Ready to write response ...
	2024/09/17 16:59:38 Ready to marshal response ...
	2024/09/17 16:59:38 Ready to write response ...
	2024/09/17 16:59:38 Ready to marshal response ...
	2024/09/17 16:59:38 Ready to write response ...
	2024/09/17 17:07:51 Ready to marshal response ...
	2024/09/17 17:07:51 Ready to write response ...
	2024/09/17 17:07:52 Ready to marshal response ...
	2024/09/17 17:07:52 Ready to write response ...
	2024/09/17 17:08:10 Ready to marshal response ...
	2024/09/17 17:08:10 Ready to write response ...
	2024/09/17 17:08:41 Ready to marshal response ...
	2024/09/17 17:08:41 Ready to write response ...
	2024/09/17 17:08:41 Ready to marshal response ...
	2024/09/17 17:08:41 Ready to write response ...
	2024/09/17 17:08:52 Ready to marshal response ...
	2024/09/17 17:08:52 Ready to write response ...
	
	
	==> kernel <==
	 17:08:54 up 13 min,  0 users,  load average: 1.27, 0.73, 0.50
	Linux addons-684000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a7a6f6370332] <==
	I0917 16:59:28.945117       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 16:59:29.035413       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 16:59:29.098677       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 16:59:29.323696       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0917 16:59:29.479671       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0917 16:59:29.942350       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 16:59:29.946117       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0917 16:59:30.054901       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 16:59:30.096816       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 16:59:30.324568       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 16:59:30.405378       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 17:08:00.767628       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 17:08:25.271852       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:25.271900       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:25.301476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:25.301538       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:25.317476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:25.317527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:25.336374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:25.336579       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:08:25.390386       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:08:25.390408       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:08:26.337379       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0917 17:08:26.391514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:08:26.439851       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [1cb6489a4014] <==
	E0917 17:08:30.869698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:30.943314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:30.943340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:31.027092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.589µs"
	W0917 17:08:32.628805       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:32.628954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:34.612891       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:34.612968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:34.775140       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:34.775173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:35.224888       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:35.224948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:41.086894       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0917 17:08:42.702884       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:42.702939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:42.745752       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:42.745797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:46.130664       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 17:08:46.130814       1 shared_informer.go:320] Caches are synced for resource quota
	W0917 17:08:46.284942       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:46.285069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:46.441302       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 17:08:46.441365       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 17:08:52.045075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.274µs"
	I0917 17:08:52.916643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="4.561µs"
	
	
	==> kube-proxy [c62a641e6ed7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 16:56:18.836359       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 16:56:18.845450       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.2"]
	E0917 16:56:18.845609       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:18.923989       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 16:56:18.924030       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 16:56:18.924047       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:18.928955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:18.930287       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:18.930296       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:18.936346       1 config.go:199] "Starting service config controller"
	I0917 16:56:18.936365       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:18.936382       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:18.936386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:18.938955       1 config.go:328] "Starting node config controller"
	I0917 16:56:18.938982       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:19.037833       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:19.037896       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:19.039671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [47af1958055e] <==
	W0917 16:56:07.918461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:07.918728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:07.918521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:07.918741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:07.918549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:07.918775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:07.918582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:07.918788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:07.918614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:07.918822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:08.752807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:08.752865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:08.762598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 16:56:08.762782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:08.806179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:08.806339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:08.809361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:08.809394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:08.935628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:08.935963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:09.009928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 16:56:09.010196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:09.091655       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 16:56:09.091863       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0917 16:56:11.314325       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.051421    2051 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b155ed4ad32e5fddfac3eb48d7f9246c5ff8e064792eaabccc4cbfc4ffbbfb"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: E0917 17:08:52.292163    2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef890bd4-bf38-43db-9c37-21e8fb1e16fa" containerName="busybox"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.292217    2051 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef890bd4-bf38-43db-9c37-21e8fb1e16fa" containerName="busybox"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.350337    2051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/baaa83e4-310e-4fd2-a7f1-64266685c791-gcp-creds\") pod \"helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1\" (UID: \"baaa83e4-310e-4fd2-a7f1-64266685c791\") " pod="local-path-storage/helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.350452    2051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwpm\" (UniqueName: \"kubernetes.io/projected/baaa83e4-310e-4fd2-a7f1-64266685c791-kube-api-access-zbwpm\") pod \"helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1\" (UID: \"baaa83e4-310e-4fd2-a7f1-64266685c791\") " pod="local-path-storage/helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.350475    2051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/baaa83e4-310e-4fd2-a7f1-64266685c791-script\") pod \"helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1\" (UID: \"baaa83e4-310e-4fd2-a7f1-64266685c791\") " pod="local-path-storage/helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.350490    2051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/baaa83e4-310e-4fd2-a7f1-64266685c791-data\") pod \"helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1\" (UID: \"baaa83e4-310e-4fd2-a7f1-64266685c791\") " pod="local-path-storage/helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1"
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.452738    2051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcqlk\" (UniqueName: \"kubernetes.io/projected/d41ea116-b4f8-4b3d-ae06-2e78540cb794-kube-api-access-gcqlk\") pod \"d41ea116-b4f8-4b3d-ae06-2e78540cb794\" (UID: \"d41ea116-b4f8-4b3d-ae06-2e78540cb794\") "
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.460063    2051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41ea116-b4f8-4b3d-ae06-2e78540cb794-kube-api-access-gcqlk" (OuterVolumeSpecName: "kube-api-access-gcqlk") pod "d41ea116-b4f8-4b3d-ae06-2e78540cb794" (UID: "d41ea116-b4f8-4b3d-ae06-2e78540cb794"). InnerVolumeSpecName "kube-api-access-gcqlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.553496    2051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbb8q\" (UniqueName: \"kubernetes.io/projected/3f55bb41-a07d-4deb-a7a0-0034c2e839d0-kube-api-access-rbb8q\") pod \"3f55bb41-a07d-4deb-a7a0-0034c2e839d0\" (UID: \"3f55bb41-a07d-4deb-a7a0-0034c2e839d0\") "
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.553644    2051 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gcqlk\" (UniqueName: \"kubernetes.io/projected/d41ea116-b4f8-4b3d-ae06-2e78540cb794-kube-api-access-gcqlk\") on node \"addons-684000\" DevicePath \"\""
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.555448    2051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f55bb41-a07d-4deb-a7a0-0034c2e839d0-kube-api-access-rbb8q" (OuterVolumeSpecName: "kube-api-access-rbb8q") pod "3f55bb41-a07d-4deb-a7a0-0034c2e839d0" (UID: "3f55bb41-a07d-4deb-a7a0-0034c2e839d0"). InnerVolumeSpecName "kube-api-access-rbb8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:52 addons-684000 kubelet[2051]: I0917 17:08:52.654788    2051 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rbb8q\" (UniqueName: \"kubernetes.io/projected/3f55bb41-a07d-4deb-a7a0-0034c2e839d0-kube-api-access-rbb8q\") on node \"addons-684000\" DevicePath \"\""
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.076823    2051 scope.go:117] "RemoveContainer" containerID="1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.114900    2051 scope.go:117] "RemoveContainer" containerID="1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: E0917 17:08:53.117490    2051 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896" containerID="1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.117585    2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896"} err="failed to get container status \"1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1256203c3b00fbe5ad815924d11d07cae34e07d8b6455f096bd0349702c1f896"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.117624    2051 scope.go:117] "RemoveContainer" containerID="d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.145672    2051 scope.go:117] "RemoveContainer" containerID="d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: E0917 17:08:53.146508    2051 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0" containerID="d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.146527    2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0"} err="failed to get container status \"d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0\": rpc error: code = Unknown desc = Error response from daemon: No such container: d0e5f14f56e78f4a60bbff589b14cb42db555be34f8f52842527fbe2dbfd6bc0"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.301830    2051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f55bb41-a07d-4deb-a7a0-0034c2e839d0" path="/var/lib/kubelet/pods/3f55bb41-a07d-4deb-a7a0-0034c2e839d0/volumes"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.302153    2051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad532af-569d-4495-9b5e-3ab0750cdc0d" path="/var/lib/kubelet/pods/9ad532af-569d-4495-9b5e-3ab0750cdc0d/volumes"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.302402    2051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d41ea116-b4f8-4b3d-ae06-2e78540cb794" path="/var/lib/kubelet/pods/d41ea116-b4f8-4b3d-ae06-2e78540cb794/volumes"
	Sep 17 17:08:53 addons-684000 kubelet[2051]: I0917 17:08:53.302688    2051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef890bd4-bf38-43db-9c37-21e8fb1e16fa" path="/var/lib/kubelet/pods/ef890bd4-bf38-43db-9c37-21e8fb1e16fa/volumes"
	
	
	==> storage-provisioner [4ae7c3774cc9] <==
	I0917 16:56:23.261951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:23.278617       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:23.278645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:23.304031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:23.304190       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-684000_45896d86-b9de-483e-904f-f376773a7a20!
	I0917 16:56:23.305985       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b83a41d-7c54-4dfe-b899-696050c6dba1", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-684000_45896d86-b9de-483e-904f-f376773a7a20 became leader
	I0917 16:56:23.405714       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-684000_45896d86-b9de-483e-904f-f376773a7a20!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-684000 -n addons-684000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-684000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-lk6lc ingress-nginx-admission-patch-x9d4r helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-684000 describe pod busybox ingress-nginx-admission-create-lk6lc ingress-nginx-admission-patch-x9d4r helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-684000 describe pod busybox ingress-nginx-admission-create-lk6lc ingress-nginx-admission-patch-x9d4r helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1: exit status 1 (55.248937ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-684000/192.169.0.2
	Start Time:       Tue, 17 Sep 2024 09:59:38 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbjbc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bbjbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-684000
	  Normal   Pulling    7m51s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m51s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m51s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m38s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m16s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lk6lc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x9d4r" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-684000 describe pod busybox ingress-nginx-admission-create-lk6lc ingress-nginx-admission-patch-x9d4r helper-pod-delete-pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.11s)
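The Registry failure above comes down to the busybox pod stuck in ImagePullBackOff: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A minimal sketch for isolating whether the failure is host-side (registry reachability/auth) or cluster-side, assuming the addons-684000 context still exists; the docker pull is a hypothetical manual check, not part of the harness:

	# Attempt the same pull the kubelet made, outside the cluster
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Re-run the harness's non-running-pod query
	kubectl --context addons-684000 get po -A --field-selector=status.phase!=Running
	# Inspect the pull events for the stuck pod
	kubectl --context addons-684000 describe pod busybox -n default

If the manual pull also fails with an auth error, the problem is environmental (runner network or gcr.io access) rather than anything in the addon under test.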

                                                
                                    
TestCertOptions (251.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-836000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0917 11:01:16.995928    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:01:19.988168    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:01:44.718466    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-836000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.12587162s)

                                                
                                                
-- stdout --
	* [cert-options-836000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-836000" primary control-plane node in "cert-options-836000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-836000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:aa:a:54:f1:8
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-836000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:e1:f5:4a:6f:5a
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:e1:f5:4a:6f:5a
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-836000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-836000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-836000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (164.803864ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-836000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-836000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-836000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-836000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-836000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (162.834069ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-836000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-836000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-836000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-17 11:04:11.084931 -0700 PDT m=+4152.981209093
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-836000 -n cert-options-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-836000 -n cert-options-836000: exit status 7 (78.859944ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0917 11:04:11.162103    6741 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 11:04:11.162127    6741 status.go:249] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-836000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-836000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-836000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-836000: (5.236941849s)
--- FAIL: TestCertOptions (251.81s)
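
The SAN assertions logged above (cert_options_test.go:69) reduce to parsing the apiserver certificate and checking its DNS and IP SAN lists. As a minimal Go sketch of that check, assuming the certificate has been copied out of the VM to a local file named apiserver.crt (the path and program are illustrative only, not the test's actual helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The test expects entries such as localhost/www.google.com (DNS)
	// and 127.0.0.1/192.168.15.15 (IP) to appear here.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}
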
TestCertExpiration (1805.34s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-489000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-489000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.810269476s)
-- stdout --
	* [cert-expiration-489000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-489000" primary control-plane node in "cert-expiration-489000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-489000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:a7:97:c6:c:cb
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-489000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:10:38:74:e9:b3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:10:38:74:e9:b3
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-489000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0917 11:03:41.642071    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:03:58.525124    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-489000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-489000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : signal: killed (22m53.190560459s)
-- stdout --
	* [cert-expiration-489000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-489000" primary control-plane node in "cert-expiration-489000" cluster
	* Updating the running hyperkit "cert-expiration-489000" VM ...
	* Updating the running hyperkit "cert-expiration-489000" VM ...
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-489000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : signal: killed
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-489000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-489000" primary control-plane node in "cert-expiration-489000" cluster
	* Updating the running hyperkit "cert-expiration-489000" VM ...
	* Updating the running hyperkit "cert-expiration-489000" VM ...
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-17 11:29:01.065194 -0700 PDT m=+5642.898168485
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-489000 -n cert-expiration-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-489000 -n cert-expiration-489000: exit status 7 (81.444923ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0917 11:29:01.144622    7588 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 11:29:01.144645    7588 status.go:249] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-489000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-489000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-489000: (5.257014114s)
--- FAIL: TestCertExpiration (1805.34s)
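
TestCertExpiration starts the cluster with --cert-expiration=3m, lets the certificates lapse, and expects the follow-up start to warn about them. The expiry condition itself is just a comparison of the certificate's NotAfter against the current time; a minimal Go sketch, again assuming a hypothetical local copy of apiserver.crt (illustrative, not the test's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A cluster started with --cert-expiration=3m crosses this
	// threshold three minutes after the certs are generated.
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate expired at", cert.NotAfter)
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}
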
TestDockerFlags (252.43s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-702000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0917 10:56:16.996675    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.003838    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.017214    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.040617    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.083474    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.166824    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.330192    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:17.653610    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:18.297072    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:19.580283    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:19.989594    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:22.143713    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:27.267150    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:37.508973    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:56:57.990744    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:57:38.953844    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-702000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.679145525s)
-- stdout --
	* [docker-flags-702000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-702000" primary control-plane node in "docker-flags-702000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-702000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	I0917 10:55:52.213949    6290 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:55:52.214136    6290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:52.214141    6290 out.go:358] Setting ErrFile to fd 2...
	I0917 10:55:52.214145    6290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:55:52.214333    6290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:55:52.215911    6290 out.go:352] Setting JSON to false
	I0917 10:55:52.238757    6290 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5119,"bootTime":1726590633,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:55:52.238907    6290 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:55:52.261839    6290 out.go:177] * [docker-flags-702000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:55:52.303848    6290 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:55:52.303877    6290 notify.go:220] Checking for updates...
	I0917 10:55:52.346567    6290 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:55:52.367855    6290 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:55:52.388839    6290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:55:52.409595    6290 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:55:52.430853    6290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:55:52.452330    6290 config.go:182] Loaded profile config "force-systemd-flag-812000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:55:52.452421    6290 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:55:52.481683    6290 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 10:55:52.523915    6290 start.go:297] selected driver: hyperkit
	I0917 10:55:52.523932    6290 start.go:901] validating driver "hyperkit" against <nil>
	I0917 10:55:52.523942    6290 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:55:52.527040    6290 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:52.527166    6290 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:55:52.535674    6290 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:55:52.539641    6290 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:55:52.539660    6290 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:55:52.539704    6290 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:55:52.539924    6290 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0917 10:55:52.539960    6290 cni.go:84] Creating CNI manager for ""
	I0917 10:55:52.540001    6290 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:55:52.540012    6290 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:55:52.540081    6290 start.go:340] cluster config:
	{Name:docker-flags-702000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-702000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:55:52.540173    6290 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:55:52.582899    6290 out.go:177] * Starting "docker-flags-702000" primary control-plane node in "docker-flags-702000" cluster
	I0917 10:55:52.603957    6290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:55:52.603994    6290 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:55:52.604012    6290 cache.go:56] Caching tarball of preloaded images
	I0917 10:55:52.604130    6290 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:55:52.604139    6290 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:55:52.604223    6290 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/docker-flags-702000/config.json ...
	I0917 10:55:52.604240    6290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/docker-flags-702000/config.json: {Name:mk6580841395a66c08570658e35e7ebaa4e17b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:55:52.604550    6290 start.go:360] acquireMachinesLock for docker-flags-702000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:56:49.428350    6290 start.go:364] duration metric: took 56.823962561s to acquireMachinesLock for "docker-flags-702000"
	I0917 10:56:49.428389    6290 start.go:93] Provisioning new machine with config: &{Name:docker-flags-702000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-702000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:56:49.428442    6290 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:56:49.449828    6290 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:56:49.450011    6290 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:56:49.450055    6290 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:56:49.458833    6290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53789
	I0917 10:56:49.459192    6290 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:56:49.459586    6290 main.go:141] libmachine: Using API Version  1
	I0917 10:56:49.459597    6290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:56:49.459839    6290 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:56:49.459968    6290 main.go:141] libmachine: (docker-flags-702000) Calling .GetMachineName
	I0917 10:56:49.460060    6290 main.go:141] libmachine: (docker-flags-702000) Calling .DriverName
	I0917 10:56:49.460172    6290 start.go:159] libmachine.API.Create for "docker-flags-702000" (driver="hyperkit")
	I0917 10:56:49.460196    6290 client.go:168] LocalClient.Create starting
	I0917 10:56:49.460228    6290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:56:49.460278    6290 main.go:141] libmachine: Decoding PEM data...
	I0917 10:56:49.460294    6290 main.go:141] libmachine: Parsing certificate...
	I0917 10:56:49.460352    6290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:56:49.460389    6290 main.go:141] libmachine: Decoding PEM data...
	I0917 10:56:49.460399    6290 main.go:141] libmachine: Parsing certificate...
	I0917 10:56:49.460412    6290 main.go:141] libmachine: Running pre-create checks...
	I0917 10:56:49.460419    6290 main.go:141] libmachine: (docker-flags-702000) Calling .PreCreateCheck
	I0917 10:56:49.460501    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:49.460727    6290 main.go:141] libmachine: (docker-flags-702000) Calling .GetConfigRaw
	I0917 10:56:49.492903    6290 main.go:141] libmachine: Creating machine...
	I0917 10:56:49.492911    6290 main.go:141] libmachine: (docker-flags-702000) Calling .Create
	I0917 10:56:49.492994    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:49.493111    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:56:49.492987    6313 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:56:49.493182    6290 main.go:141] libmachine: (docker-flags-702000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:56:49.715856    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:56:49.715745    6313 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/id_rsa...
	I0917 10:56:49.875342    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:56:49.875271    6313 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/docker-flags-702000.rawdisk...
	I0917 10:56:49.875353    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Writing magic tar header
	I0917 10:56:49.875365    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Writing SSH key tar header
	I0917 10:56:49.875926    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:56:49.875882    6313 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000 ...
	I0917 10:56:50.238591    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:50.238618    6290 main.go:141] libmachine: (docker-flags-702000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/hyperkit.pid
	I0917 10:56:50.238631    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Using UUID 9a620f20-3efd-4f93-afc8-3acb30baf66b
	I0917 10:56:50.263447    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Generated MAC ba:da:75:56:5c:b5
	I0917 10:56:50.263464    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000
	I0917 10:56:50.263499    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9a620f20-3efd-4f93-afc8-3acb30baf66b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:56:50.263539    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9a620f20-3efd-4f93-afc8-3acb30baf66b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:56:50.263588    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9a620f20-3efd-4f93-afc8-3acb30baf66b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/docker-flags-702000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000"}
	I0917 10:56:50.263630    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9a620f20-3efd-4f93-afc8-3acb30baf66b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/docker-flags-702000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000"
	I0917 10:56:50.263639    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:56:50.266479    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 DEBUG: hyperkit: Pid is 6314
	I0917 10:56:50.266937    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 0
	I0917 10:56:50.266954    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:50.267045    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:56:50.268085    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:56:50.268181    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:50.268204    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:50.268223    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:50.268236    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:50.268247    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:50.268258    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:50.268279    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:50.268291    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:50.268303    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:50.268318    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:50.268348    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:50.268368    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:50.268384    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:50.268399    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:50.268413    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:50.268427    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:50.268440    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:50.268454    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:50.274489    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:56:50.282499    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:56:50.283410    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:56:50.283433    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:56:50.283446    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:56:50.283455    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:56:50.657648    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:56:50.657673    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:56:50.772363    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:56:50.772383    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:56:50.772394    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:56:50.772402    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:56:50.773254    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:56:50.773274    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:56:52.269127    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 1
	I0917 10:56:52.269142    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:52.269240    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:56:52.270153    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:56:52.270202    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:52.270211    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:52.270221    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:52.270230    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:52.270238    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:52.270267    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:52.270280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:52.270294    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:52.270301    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:52.270316    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:52.270327    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:52.270335    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:52.270343    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:52.270350    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:52.270357    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:52.270364    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:52.270383    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:52.270400    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:54.272186    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 2
	I0917 10:56:54.272199    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:54.272249    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:56:54.273245    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:56:54.273296    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:54.273312    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:54.273327    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:54.273334    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:54.273345    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:54.273353    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:54.273380    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:54.273398    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:54.273411    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:54.273426    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:54.273435    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:54.273447    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:54.273455    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:54.273464    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:54.273472    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:54.273478    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:54.273484    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:54.273498    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:56.153278    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:56:56.153406    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:56:56.153415    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:56:56.173495    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:56:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:56:56.275546    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 3
	I0917 10:56:56.275567    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:56.275723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:56:56.276935    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:56:56.277008    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:56.277018    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:56.277029    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:56.277036    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:56.277060    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:56.277081    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:56.277103    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:56.277154    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:56.277167    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:56.277178    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:56.277187    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:56.277196    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:56.277212    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:56.277247    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:56.277261    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:56.277295    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:56.277310    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:56.277322    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:58.279196    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 4
	I0917 10:56:58.279210    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:58.279315    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:56:58.280209    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:56:58.280255    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:58.280292    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:58.280303    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:58.280310    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:58.280316    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:58.280338    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:58.280349    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:58.280358    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:58.280365    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:58.280373    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:58.280380    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:58.280389    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:58.280408    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:58.280417    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:58.280425    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:58.280433    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:58.280447    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:58.280455    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:00.281695    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 5
	I0917 10:57:00.281707    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:00.281746    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:00.282808    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:00.282850    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:00.282862    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:00.282871    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:00.282882    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:00.282895    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:00.282915    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:00.282925    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:00.282933    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:00.282940    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:00.282949    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:00.282956    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:00.282963    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:00.282970    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:00.282986    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:00.282999    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:00.283013    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:00.283023    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:00.283031    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:02.284378    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 6
	I0917 10:57:02.284390    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:02.284480    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:02.285372    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:02.285420    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:02.285433    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:02.285451    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:02.285460    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:02.285467    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:02.285477    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:02.285489    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:02.285499    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:02.285506    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:02.285512    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:02.285521    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:02.285531    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:02.285538    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:02.285545    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:02.285553    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:02.285559    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:02.285564    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:02.285571    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:04.286558    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 7
	I0917 10:57:04.286569    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:04.286612    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:04.287545    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:04.287584    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:04.287592    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:04.287601    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:04.287611    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:04.287617    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:04.287622    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:04.287628    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:04.287634    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:04.287641    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:04.287648    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:04.287654    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:04.287660    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:04.287674    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:04.287688    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:04.287696    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:04.287703    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:04.287715    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:04.287723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:06.288090    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 8
	I0917 10:57:06.288103    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:06.288158    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:06.289053    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:06.289105    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:06.289115    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:06.289127    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:06.289134    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:06.289150    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:06.289162    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:06.289171    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:06.289179    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:06.289186    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:06.289191    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:06.289208    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:06.289218    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:06.289225    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:06.289234    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:06.289242    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:06.289253    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:06.289259    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:06.289268    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:08.291342    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 9
	I0917 10:57:08.291356    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:08.291401    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:08.292380    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:08.292410    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:08.292419    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:08.292426    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:08.292432    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:08.292446    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:08.292455    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:08.292464    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:08.292473    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:08.292479    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:08.292492    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:08.292500    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:08.292507    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:08.292514    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:08.292521    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:08.292528    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:08.292533    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:08.292550    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:08.292561    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:10.294577    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 10
	I0917 10:57:10.294600    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:10.294657    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:10.295542    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:10.295590    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:10.295602    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:10.295616    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:10.295624    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:10.295630    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:10.295635    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:10.295641    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:10.295646    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:10.295660    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:10.295667    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:10.295684    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:10.295696    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:10.295705    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:10.295710    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:10.295717    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:10.295723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:10.295730    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:10.295739    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:12.295820    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 11
	I0917 10:57:12.295843    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:12.295894    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:12.296826    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:12.296885    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:12.296910    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:12.296926    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:12.296953    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:12.296963    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:12.296973    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:12.296982    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:12.296993    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:12.297004    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:12.297015    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:12.297023    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:12.297031    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:12.297048    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:12.297055    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:12.297062    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:12.297067    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:12.297074    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:12.297081    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:14.299093    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 12
	I0917 10:57:14.299105    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:14.299172    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:14.300133    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:14.300155    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:14.300173    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:14.300187    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:14.300201    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:14.300216    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:14.300228    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:14.300245    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:14.300257    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:14.300266    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:14.300273    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:14.300280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:14.300287    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:14.300294    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:14.300300    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:14.300306    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:14.300311    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:14.300361    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:14.300396    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:16.302348    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 13
	I0917 10:57:16.302362    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:16.302381    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:16.303290    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:16.303334    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:16.303351    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:16.303361    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:16.303367    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:16.303375    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:16.303380    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:16.303392    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:16.303415    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:16.303429    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:16.303439    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:16.303448    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:16.303456    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:16.303463    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:16.303469    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:16.303475    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:16.303480    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:16.303496    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:16.303507    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:18.305519    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 14
	I0917 10:57:18.305532    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:18.305596    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:18.306477    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:18.306525    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:18.306534    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:18.306542    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:18.306548    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:18.306555    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:18.306560    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:18.306582    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:18.306593    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:18.306611    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:18.306619    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:18.306635    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:18.306649    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:18.306658    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:18.306666    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:18.306673    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:18.306680    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:18.306693    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:18.306705    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:20.307680    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 15
	I0917 10:57:20.307692    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:20.307746    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:20.308632    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:20.308681    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:20.308690    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:20.308698    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:20.308705    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:20.308711    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:20.308717    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:20.308723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:20.308729    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:20.308735    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:20.308744    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:20.308753    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:20.308759    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:20.308766    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:20.308775    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:20.308789    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:20.308802    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:20.308819    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:20.308830    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:22.309052    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 16
	I0917 10:57:22.309066    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:22.309136    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:22.310031    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:22.310108    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:22.310120    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:22.310130    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:22.310135    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:22.310142    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:22.310147    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:22.310176    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:22.310207    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:22.310248    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:22.310256    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:22.310263    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:22.310269    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:22.310283    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:22.310294    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:22.310304    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:22.310310    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:22.310316    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:22.310323    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:24.312219    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 17
	I0917 10:57:24.312234    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:24.312347    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:24.313214    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:24.313263    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:24.313280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:24.313291    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:24.313301    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:24.313311    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:24.313319    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:24.313325    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:24.313332    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:24.313338    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:24.313344    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:24.313349    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:24.313355    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:24.313402    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:24.313435    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:24.313465    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:24.313480    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:24.313488    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:24.313500    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:26.314273    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 18
	I0917 10:57:26.314287    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:26.314345    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:26.315223    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:26.315285    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:26.315300    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:26.315308    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:26.315314    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:26.315321    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:26.315326    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:26.315351    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:26.315363    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:26.315370    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:26.315377    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:26.315389    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:26.315400    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:26.315412    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:26.315420    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:26.315427    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:26.315435    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:26.315441    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:26.315450    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:28.315992    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 19
	I0917 10:57:28.316004    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:28.316052    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:28.316966    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:28.317045    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:28.317055    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:28.317065    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:28.317082    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:28.317095    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:28.317108    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:28.317124    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:28.317136    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:28.317143    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:28.317152    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:28.317158    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:28.317164    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:28.317178    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:28.317187    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:28.317193    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:28.317201    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:28.317208    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:28.317215    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:30.317269    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 20
	I0917 10:57:30.317287    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:30.317346    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:30.318326    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:30.318366    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:30.318373    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:30.318394    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:30.318402    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:30.318416    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:30.318432    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:30.318440    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:30.318448    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:30.318454    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:30.318463    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:30.318469    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:30.318477    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:30.318485    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:30.318491    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:30.318496    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:30.318503    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:30.318508    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:30.318528    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:32.318622    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 21
	I0917 10:57:32.318633    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:32.318750    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:32.319688    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:32.319702    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:32.319708    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:32.319716    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:32.319722    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:32.319731    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:32.319746    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:32.319756    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:32.319763    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:32.319768    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:32.319776    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:32.319784    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:32.319797    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:32.319808    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:32.319818    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:32.319827    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:32.319840    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:32.319851    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:32.319861    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:34.320478    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 22
	I0917 10:57:34.320488    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:34.320582    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:34.321457    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:34.321511    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:34.321521    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:34.321530    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:34.321537    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:34.321548    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:34.321554    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:34.321560    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:34.321565    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:34.321580    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:34.321591    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:34.321602    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:34.321621    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:34.321627    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:34.321637    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:34.321649    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:34.321656    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:34.321668    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:34.321677    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:36.323084    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 23
	I0917 10:57:36.323100    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:36.323145    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:36.324044    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:36.324090    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:36.324104    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:36.324121    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:36.324132    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:36.324144    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:36.324153    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:36.324161    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:36.324168    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:36.324175    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:36.324182    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:36.324198    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:36.324210    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:36.324232    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:36.324264    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:36.324270    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:36.324278    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:36.324285    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:36.324293    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:38.326294    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 24
	I0917 10:57:38.326308    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:38.326362    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:38.327256    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:38.327302    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:38.327325    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:38.327337    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:38.327350    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:38.327380    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:38.327394    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:38.327407    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:38.327419    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:38.327435    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:38.327446    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:38.327453    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:38.327461    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:38.327468    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:38.327475    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:38.327486    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:38.327494    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:38.327503    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:38.327511    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:40.328605    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 25
	I0917 10:57:40.328629    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:40.328677    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:40.329603    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:40.329623    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:40.329639    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:40.329654    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:40.329662    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:40.329670    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:40.329679    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:40.329689    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:40.329697    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:40.329704    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:40.329709    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:40.329716    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:40.329723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:40.329729    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:40.329736    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:40.329757    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:40.329773    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:40.329788    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:40.329800    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:42.329811    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 26
	I0917 10:57:42.329824    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:42.329869    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:42.330958    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:42.331057    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:42.331065    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:42.331083    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:42.331099    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:42.331110    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:42.331120    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:42.331128    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:42.331134    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:42.331140    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:42.331146    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:42.331158    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:42.331165    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:42.331172    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:42.331178    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:42.331183    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:42.331197    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:42.331208    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:42.331217    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:44.331528    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 27
	I0917 10:57:44.331544    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:44.331654    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:44.332526    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:44.332592    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:44.332605    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:44.332613    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:44.332622    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:44.332628    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:44.332636    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:44.332641    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:44.332647    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:44.332653    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:44.332660    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:44.332667    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:44.332673    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:44.332679    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:44.332684    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:44.332698    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:44.332710    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:44.332718    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:44.332723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:46.333676    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 28
	I0917 10:57:46.333692    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:46.333741    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:46.334695    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:46.334747    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:46.334764    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:46.334782    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:46.334792    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:46.334799    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:46.334819    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:46.334833    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:46.334845    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:46.334852    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:46.334858    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:46.334867    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:46.334873    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:46.334881    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:46.334887    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:46.334896    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:46.334904    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:46.334912    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:46.334929    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:48.336929    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 29
	I0917 10:57:48.336953    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:48.336991    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:48.337861    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for ba:da:75:56:5c:b5 in /var/db/dhcpd_leases ...
	I0917 10:57:48.337915    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:48.337926    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:48.337935    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:48.337941    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:48.337947    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:48.337952    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:48.337959    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:48.337972    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:48.337979    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:48.337985    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:48.337997    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:48.338009    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:48.338017    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:48.338026    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:48.338032    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:48.338039    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:48.338046    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:48.338052    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:50.340143    6290 client.go:171] duration metric: took 1m0.880109341s to LocalClient.Create
	I0917 10:57:52.341248    6290 start.go:128] duration metric: took 1m2.912964802s to createHost
	I0917 10:57:52.341268    6290 start.go:83] releasing machines lock for "docker-flags-702000", held for 1m2.913085984s
	W0917 10:57:52.341281    6290 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:da:75:56:5c:b5
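
	The attempts stop here: LocalClient.Create gives up after roughly a minute of 2-second polls and surfaces "IP address never found in dhcp leases file". A hedged sketch of that outer retry loop follows; the 2s interval and 60s budget are inferred from the timestamps above, not taken from the driver's source:

	// waitip sketches the retry loop implied by the "Attempt N" lines:
	// poll the lease file until the VM's MAC shows up, and give up with
	// the "IP address never found" error after a deadline.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errNoLease = errors.New("IP address never found in dhcp leases file")

	// lookupIP stands in for a scan of /var/db/dhcpd_leases (see the
	// earlier sketch); in this run it never finds anything.
	func lookupIP(mac string) string { return "" }

	func waitForIP(mac string, interval, budget time.Duration) (string, error) {
		deadline := time.Now().Add(budget)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			fmt.Printf("Attempt %d: searching for %s\n", attempt, mac)
			if ip := lookupIP(mac); ip != "" {
				return ip, nil
			}
			time.Sleep(interval)
		}
		return "", errNoLease
	}

	func main() {
		if _, err := waitForIP("ba:da:75:56:5c:b5", 2*time.Second, 60*time.Second); err != nil {
			fmt.Println("error:", err) // what surfaces as "Temporary error" above
		}
	}
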
	I0917 10:57:52.341627    6290 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:57:52.341644    6290 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:57:52.350396    6290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53791
	I0917 10:57:52.350804    6290 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:57:52.351146    6290 main.go:141] libmachine: Using API Version  1
	I0917 10:57:52.351155    6290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:57:52.351361    6290 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:57:52.351720    6290 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:57:52.351740    6290 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:57:52.360158    6290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53793
	I0917 10:57:52.360510    6290 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:57:52.360884    6290 main.go:141] libmachine: Using API Version  1
	I0917 10:57:52.360906    6290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:57:52.361106    6290 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:57:52.361223    6290 main.go:141] libmachine: (docker-flags-702000) Calling .GetState
	I0917 10:57:52.361314    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:52.361381    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:52.362463    6290 main.go:141] libmachine: (docker-flags-702000) Calling .DriverName
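
	The "Launching plugin server for driver hyperkit" / "Plugin server listening at address 127.0.0.1:537xx" / "() Calling .GetVersion" lines are libmachine's out-of-process driver protocol: the driver binary is re-executed as a child that serves RPC on an ephemeral loopback port, and the parent calls methods like GetVersion and GetMachineName over that connection. The toy model below uses net/rpc; the method names and types are illustrative, not libmachine's actual wire protocol:

	// pluginrpc: a child-process driver serving RPC on a loopback port.
	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	type DriverServer struct{}

	// GetVersion mirrors the "() Calling .GetVersion" handshake in the log.
	func (s *DriverServer) GetVersion(_ struct{}, reply *int) error {
		*reply = 1 // "Using API Version 1"
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&DriverServer{}); err != nil {
			panic(err)
		}
		// Port 0 asks the kernel for a free port, which is why every
		// launch in the log prints a different 537xx address.
		l, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		fmt.Println("Plugin server listening at address", l.Addr())
		go srv.Accept(l)

		// The parent process would now dial that address and call methods:
		c, err := rpc.Dial("tcp", l.Addr().String())
		if err != nil {
			panic(err)
		}
		var v int
		if err := c.Call("DriverServer.GetVersion", struct{}{}, &v); err != nil {
			panic(err)
		}
		fmt.Println("Using API Version", v)
	}

	This also explains why the server is relaunched repeatedly in the log: each fresh libmachine client spawns its own plugin process.
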
	I0917 10:57:52.425545    6290 out.go:177] * Deleting "docker-flags-702000" in hyperkit ...
	I0917 10:57:52.446571    6290 main.go:141] libmachine: (docker-flags-702000) Calling .Remove
	I0917 10:57:52.446705    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:52.446714    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:52.446781    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:52.447873    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:52.447942    6290 main.go:141] libmachine: (docker-flags-702000) DBG | waiting for graceful shutdown
	I0917 10:57:53.450077    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:53.450176    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:53.451275    6290 main.go:141] libmachine: (docker-flags-702000) DBG | waiting for graceful shutdown
	I0917 10:57:54.451734    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:54.451811    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:54.453470    6290 main.go:141] libmachine: (docker-flags-702000) DBG | waiting for graceful shutdown
	I0917 10:57:55.453662    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:55.453756    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:55.454421    6290 main.go:141] libmachine: (docker-flags-702000) DBG | waiting for graceful shutdown
	I0917 10:57:56.456286    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:56.456356    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:56.457391    6290 main.go:141] libmachine: (docker-flags-702000) DBG | waiting for graceful shutdown
	I0917 10:57:57.457812    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:57.457845    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6314
	I0917 10:57:57.458562    6290 main.go:141] libmachine: (docker-flags-702000) DBG | sending sigkill
	I0917 10:57:57.458574    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:57.470492    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:57:57 WARN : hyperkit: failed to read stdout: EOF
	I0917 10:57:57.470514    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:57:57 WARN : hyperkit: failed to read stderr: EOF
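
	The teardown above follows a wait-then-kill pattern: after Remove is called the driver checks once per second whether the hyperkit process has exited on its own ("waiting for graceful shutdown"), then falls back to SIGKILL, at which point its stdout/stderr pipes close and the two EOF warnings appear. A sketch follows; the 5-attempt budget is read off the log's timestamps, not taken from the driver, and the demo targets a throwaway child process rather than a real hyperkit pid:

	// killvm: wait briefly for a graceful exit, then SIGKILL.
	package main

	import (
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)

	// alive reports whether pid still exists; signal 0 performs the
	// existence check without delivering anything.
	func alive(pid int) bool {
		return syscall.Kill(pid, 0) == nil
	}

	func stopVM(pid int) {
		for i := 0; i < 5; i++ {
			if !alive(pid) {
				return // exited gracefully
			}
			fmt.Println("waiting for graceful shutdown")
			time.Sleep(time.Second)
		}
		fmt.Println("sending sigkill")
		_ = syscall.Kill(pid, syscall.SIGKILL)
	}

	func main() {
		// Demonstrate against a disposable child, not pid 6314 from this run.
		cmd := exec.Command("sleep", "30")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		stopVM(cmd.Process.Pid)
		_ = cmd.Wait() // reap the killed child
	}
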
	W0917 10:57:57.486642    6290 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:da:75:56:5c:b5
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:da:75:56:5c:b5
	I0917 10:57:57.486662    6290 start.go:729] Will try again in 5 seconds ...
	I0917 10:58:02.487577    6290 start.go:360] acquireMachinesLock for docker-flags-702000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:58:55.244660    6290 start.go:364] duration metric: took 52.757203489s to acquireMachinesLock for "docker-flags-702000"
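
	The acquireMachinesLock lines show minikube serializing machine creation behind a named lock with Delay:500ms and Timeout:13m0s; this acquisition waited ~53s behind another concurrent test. As a hedged illustration of the semantics (not minikube's actual lock implementation), a lock file created with O_EXCL gives the same retry-until-deadline behavior:

	// mutexsketch: retry every delay until timeout to take a named lock.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"path/filepath"
		"time"
	)

	func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
		path := filepath.Join(os.TempDir(), name+".lock")
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation atomic: exactly one caller wins.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + name)
			}
			time.Sleep(delay) // the Delay:500ms in the log
		}
	}

	func main() {
		start := time.Now()
		release, err := acquire("docker-flags-702000", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Printf("took %s to acquireMachinesLock\n", time.Since(start))
	}
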
	I0917 10:58:55.244698    6290 start.go:93] Provisioning new machine with config: &{Name:docker-flags-702000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-702000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:58:55.244774    6290 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:58:55.287055    6290 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:58:55.287136    6290 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:58:55.287153    6290 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:58:55.295711    6290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53797
	I0917 10:58:55.296066    6290 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:58:55.296443    6290 main.go:141] libmachine: Using API Version  1
	I0917 10:58:55.296460    6290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:58:55.296696    6290 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:58:55.296817    6290 main.go:141] libmachine: (docker-flags-702000) Calling .GetMachineName
	I0917 10:58:55.296907    6290 main.go:141] libmachine: (docker-flags-702000) Calling .DriverName
	I0917 10:58:55.297014    6290 start.go:159] libmachine.API.Create for "docker-flags-702000" (driver="hyperkit")
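
[Editor's note] Everything tagged main.go:141 libmachine above is the driver-plugin handshake: libmachine exec's docker-machine-driver-hyperkit, the child announces the address it listens on (127.0.0.1:53797 here), and the subsequent calls (GetVersion, GetMachineName, Create, ...) travel over that connection. A rough sketch under those assumptions; parseLine handling and the "Driver.GetVersion" method string are placeholders, not the real wire protocol:

package main

import (
	"bufio"
	"net/rpc"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("/Users/jenkins/workspace/out/docker-machine-driver-hyperkit")
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Assume the first line carries "... listening at address 127.0.0.1:PORT".
	line, _ := bufio.NewReader(stdout).ReadString('\n')
	addr := strings.TrimSpace(line[strings.LastIndex(line, " ")+1:])
	client, err := rpc.Dial("tcp", addr)
	if err != nil {
		panic(err)
	}
	var version int
	// Method name is a placeholder for whatever the plugin actually registers.
	_ = client.Call("Driver.GetVersion", struct{}{}, &version)
}
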
	I0917 10:58:55.297044    6290 client.go:168] LocalClient.Create starting
	I0917 10:58:55.297071    6290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:58:55.297122    6290 main.go:141] libmachine: Decoding PEM data...
	I0917 10:58:55.297136    6290 main.go:141] libmachine: Parsing certificate...
	I0917 10:58:55.297179    6290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:58:55.297219    6290 main.go:141] libmachine: Decoding PEM data...
	I0917 10:58:55.297230    6290 main.go:141] libmachine: Parsing certificate...
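
[Editor's note] The Reading/Decoding/Parsing triplet above is a plain PEM round-trip over ca.pem and cert.pem from the .minikube/certs store. Equivalent standard-library code (paths shortened from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	for _, p := range []string{"ca.pem", "cert.pem"} {
		raw, err := os.ReadFile(p) // "Reading certificate data from ..."
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw) // "Decoding PEM data..."
		if block == nil {
			panic("no PEM block in " + p)
		}
		cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
		if err != nil {
			panic(err)
		}
		fmt.Println(p, "->", cert.Subject)
	}
}
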
	I0917 10:58:55.297241    6290 main.go:141] libmachine: Running pre-create checks...
	I0917 10:58:55.297247    6290 main.go:141] libmachine: (docker-flags-702000) Calling .PreCreateCheck
	I0917 10:58:55.297388    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:55.297414    6290 main.go:141] libmachine: (docker-flags-702000) Calling .GetConfigRaw
	I0917 10:58:55.308057    6290 main.go:141] libmachine: Creating machine...
	I0917 10:58:55.308066    6290 main.go:141] libmachine: (docker-flags-702000) Calling .Create
	I0917 10:58:55.308194    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:55.308349    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:58:55.308179    6333 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:58:55.308395    6290 main.go:141] libmachine: (docker-flags-702000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:58:55.724719    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:58:55.724661    6333 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/id_rsa...
	I0917 10:58:56.165268    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:58:56.165178    6333 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/docker-flags-702000.rawdisk...
	I0917 10:58:56.165281    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Writing magic tar header
	I0917 10:58:56.165292    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Writing SSH key tar header
	I0917 10:58:56.165885    6290 main.go:141] libmachine: (docker-flags-702000) DBG | I0917 10:58:56.165838    6333 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000 ...
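
[Editor's note] The "Writing magic tar header / Writing SSH key tar header" lines reflect the boot2docker convention inherited from docker-machine: the .rawdisk begins with a tiny tar archive carrying the SSH public key generated in the previous step, which the guest unpacks on first boot, and the file is then extended sparsely to the full DiskSize. A sketch assuming that convention; file names and the exact archive layout are illustrative:

package main

import (
	"archive/tar"
	"os"
)

func main() {
	pub, err := os.ReadFile("id_rsa.pub") // key created in the step above
	if err != nil {
		panic(err)
	}
	f, err := os.Create("docker-flags-702000.rawdisk")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tw := tar.NewWriter(f) // the "magic tar header" at the head of the raw disk
	tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700})
	tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pub))})
	tw.Write(pub)
	tw.Close()

	// Extend sparsely to DiskSize (20000MB in the config above).
	if err := f.Truncate(20000 * 1024 * 1024); err != nil {
		panic(err)
	}
}
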
	I0917 10:58:56.532893    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:56.532913    6290 main.go:141] libmachine: (docker-flags-702000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/hyperkit.pid
	I0917 10:58:56.532942    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Using UUID 1909333d-830c-480a-878d-3f7cac53fa21
	I0917 10:58:56.560837    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Generated MAC c6:63:7c:93:82:8
	I0917 10:58:56.560855    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000
	I0917 10:58:56.560887    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1909333d-830c-480a-878d-3f7cac53fa21", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:58:56.560917    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1909333d-830c-480a-878d-3f7cac53fa21", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:58:56.560965    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1909333d-830c-480a-878d-3f7cac53fa21", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/docker-flags-702000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000"}
	I0917 10:58:56.561010    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1909333d-830c-480a-878d-3f7cac53fa21 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/docker-flags-702000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000"
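
[Editor's note] The Arguments/CmdLine pair above is the fully expanded hyperkit invocation: two vCPUs, 2048M of RAM, a virtio-net NIC in slot 1, the raw disk as virtio-blk in slot 2, the boot2docker ISO on ahci-cd in slot 3, and a kexec-style direct kernel boot. Reassembling the same argv with os/exec, with the machine directory factored out for readability:

package main

import "os/exec"

func main() {
	base := "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000"
	args := []string{
		"-A", "-u",
		"-F", base + "/hyperkit.pid",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "1909333d-830c-480a-878d-3f7cac53fa21",
		"-s", "2:0,virtio-blk," + base + "/docker-flags-702000.rawdisk",
		"-s", "3,ahci-cd," + base + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + base + "/tty,log=" + base + "/console-ring",
		"-f", "kexec," + base + "/bzimage," + base + "/initrd," +
			"earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-702000",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	_ = cmd // cmd.Start() would launch the VM exactly as the driver does
}
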
	I0917 10:58:56.561058    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:58:56.564183    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 DEBUG: hyperkit: Pid is 6347
	I0917 10:58:56.564655    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 0
	I0917 10:58:56.564671    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:56.564752    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:58:56.565854    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:58:56.565869    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:56.565892    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:56.565910    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:56.565925    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:56.565943    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:56.565955    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:56.565996    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:56.566016    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:56.566066    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:56.566099    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:56.566108    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:56.566117    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:56.566136    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:56.566152    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:56.566168    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:56.566180    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:56.566192    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:56.566204    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
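
[Editor's note] From here on the driver polls /var/db/dhcpd_leases for the MAC it generated. Note the unpadded octets on both sides (c6:63:7c:93:82:8, b6:5:7b:7:a4:ad): macOS's DHCP server writes hex bytes without leading zeros, so any comparison has to normalize first. A hedged parser, assuming the usual "{ name=... ip_address=... hw_address=1,<mac> ... }" block layout of that file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the leases file for mac and returns the matching IP.
func findIPByMAC(leasesPath, mac string) (string, error) {
	data, err := os.ReadFile(leasesPath)
	if err != nil {
		return "", err
	}
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Field looks like "hw_address=1,c6:63:7c:93:82:8".
			if i := strings.IndexByte(line, ','); i >= 0 &&
				normalizeMAC(line[i+1:]) == normalizeMAC(mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

// normalizeMAC strips leading zeros from each octet ("82:08" -> "82:8").
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(strings.TrimSpace(mac)), ":")
	for i, p := range parts {
		if t := strings.TrimLeft(p, "0"); t != "" {
			parts[i] = t
		} else {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "c6:63:7c:93:82:8")
	fmt.Println(ip, err)
}
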
	I0917 10:58:56.572231    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:58:56.580211    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/docker-flags-702000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:58:56.581073    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:58:56.581085    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:58:56.581111    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:58:56.581125    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:58:56.960759    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:58:56.960774    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:58:57.075375    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:58:57.075399    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:58:57.075410    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:58:57.075418    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:58:57.076246    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:58:57.076260    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:58:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:58:58.568094    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 1
	I0917 10:58:58.568107    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:58.568148    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:58:58.569090    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:58:58.569101    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:58.569129    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:58.569139    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:58.569156    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:58.569182    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:58.569194    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:58.569201    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:58.569213    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:58.569223    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:58.569233    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:58.569241    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:58.569247    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:58.569253    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:58.569259    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:58.569267    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:58.569273    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:58.569280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:58.569288    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:00.569359    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 2
	I0917 10:59:00.569380    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:00.569440    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:00.570419    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:00.570461    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:00.570469    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:00.570478    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:00.570491    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:00.570497    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:00.570504    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:00.570512    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:00.570519    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:00.570524    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:00.570531    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:00.570537    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:00.570544    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:00.570557    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:00.570563    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:00.570574    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:00.570581    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:00.570588    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:00.570607    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:02.446434    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:59:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:59:02.446618    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:59:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:59:02.446629    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:59:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:59:02.466536    6290 main.go:141] libmachine: (docker-flags-702000) DBG | 2024/09/17 10:59:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:59:02.572724    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 3
	I0917 10:59:02.572746    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:02.572959    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:02.574579    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:02.574675    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:02.574707    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:02.574717    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:02.574729    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:02.574757    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:02.574773    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:02.574787    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:02.574797    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:02.574817    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:02.574832    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:02.574854    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:02.574870    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:02.574880    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:02.574890    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:02.574901    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:02.574911    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:02.574920    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:02.574929    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:04.575348    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 4
	I0917 10:59:04.575362    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:04.575467    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:04.576342    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:04.576392    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:04.576401    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:04.576409    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:04.576416    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:04.576424    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:04.576431    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:04.576443    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:04.576453    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:04.576460    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:04.576494    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:04.576516    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:04.576530    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:04.576540    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:04.576549    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:04.576558    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:04.576564    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:04.576570    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:04.576586    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:06.578609    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 5
	I0917 10:59:06.578629    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:06.578690    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:06.579552    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:06.579586    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:06.579602    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:06.579644    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:06.579653    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:06.579660    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:06.579677    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:06.579685    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:06.579691    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:06.579703    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:06.579717    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:06.579724    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:06.579732    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:06.579749    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:06.579757    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:06.579764    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:06.579771    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:06.579782    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:06.579790    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:08.580985    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 6
	I0917 10:59:08.580999    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:08.581064    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:08.581942    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:08.582010    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:08.582022    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:08.582038    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:08.582049    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:08.582058    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:08.582070    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:08.582086    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:08.582095    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:08.582102    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:08.582108    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:08.582123    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:08.582134    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:08.582142    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:08.582153    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:08.582163    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:08.582169    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:08.582182    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:08.582189    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:10.584172    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 7
	I0917 10:59:10.584184    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:10.584232    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:10.585103    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:10.585154    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:10.585164    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:10.585179    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:10.585189    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:10.585196    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:10.585202    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:10.585208    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:10.585215    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:10.585226    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:10.585234    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:10.585241    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:10.585248    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:10.585265    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:10.585277    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:10.585285    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:10.585294    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:10.585300    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:10.585308    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:12.585886    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 8
	I0917 10:59:12.585900    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:12.585942    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:12.586815    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:12.586861    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:12.586871    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:12.586886    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:12.586893    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:12.586901    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:12.586912    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:12.586935    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:12.586954    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:12.586970    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:12.586981    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:12.586990    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:12.586997    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:12.587010    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:12.587022    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:12.587036    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:12.587048    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:12.587064    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:12.587076    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:14.589063    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 9
	I0917 10:59:14.589078    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:14.589128    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:14.590054    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:14.590088    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:14.590101    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:14.590114    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:14.590123    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:14.590130    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:14.590136    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:14.590156    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:14.590170    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:14.590179    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:14.590187    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:14.590194    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:14.590204    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:14.590211    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:14.590224    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:14.590231    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:14.590238    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:14.590244    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:14.590250    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:16.590889    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 10
	I0917 10:59:16.590904    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:16.590956    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:16.591780    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:16.591831    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:16.591842    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:16.591850    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:16.591856    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:16.591862    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:16.591870    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:16.591876    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:16.591888    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:16.591894    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:16.591900    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:16.591906    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:16.591913    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:16.591921    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:16.591931    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:16.591947    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:16.591955    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:16.591961    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:16.591967    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:18.594017    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 11
	I0917 10:59:18.594030    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:18.594090    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:18.594957    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:18.595021    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:18.595033    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:18.595041    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:18.595057    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:18.595067    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:18.595073    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:18.595079    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:18.595086    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:18.595093    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:18.595100    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:18.595105    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:18.595112    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:18.595120    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:18.595128    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:18.595137    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:18.595149    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:18.595165    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:18.595178    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:20.597181    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 12
	I0917 10:59:20.597194    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:20.597239    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:20.598223    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:20.598257    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:20.598266    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:20.598274    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:20.598280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:20.598301    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:20.598307    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:20.598324    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:20.598333    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:20.598341    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:20.598348    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:20.598355    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:20.598360    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:20.598366    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:20.598372    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:20.598378    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:20.598391    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:20.598419    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:20.598428    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:22.600436    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 13
	I0917 10:59:22.600450    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:22.600519    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:22.601415    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:22.601451    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:22.601459    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:22.601477    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:22.601485    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:22.601493    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:22.601498    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:22.601512    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:22.601525    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:22.601534    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:22.601542    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:22.601548    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:22.601554    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:22.601568    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:22.601579    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:22.601593    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:22.601601    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:22.601608    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:22.601626    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:24.603650    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 14
	I0917 10:59:24.603664    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:24.603721    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:24.604711    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:24.604743    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:24.604752    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:24.604762    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:24.604789    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:24.604799    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:24.604809    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:24.604816    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:24.604839    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:24.604851    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:24.604859    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:24.604888    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:24.604900    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:24.604911    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:24.604919    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:24.604926    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:24.604934    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:24.604941    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:24.604948    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:26.606952    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 15
	I0917 10:59:26.606967    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:26.606995    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:26.607861    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:26.607906    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:26.607913    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:26.607925    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:26.607933    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:26.607946    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:26.607958    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:26.607966    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:26.607975    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:26.607992    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:26.608006    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:26.608015    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:26.608025    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:26.608033    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:26.608039    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:26.608047    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:26.608058    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:26.608076    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:26.608084    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:28.610103    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 16
	I0917 10:59:28.610118    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:28.610169    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:28.611116    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:28.611141    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:28.611148    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:28.611161    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:28.611167    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:28.611179    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:28.611186    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:28.611192    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:28.611198    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:28.611229    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:28.611238    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:28.611245    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:28.611259    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:28.611280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:28.611294    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:28.611301    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:28.611308    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:28.611324    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:28.611337    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:30.611948    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 17
	I0917 10:59:30.611967    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:30.612047    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:30.612924    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:30.612968    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:30.612978    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:30.612991    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:30.613004    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:30.613012    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:30.613018    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:30.613024    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:30.613029    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:30.613046    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:30.613060    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:30.613077    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:30.613090    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:30.613101    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:30.613111    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:30.613123    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:30.613132    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:30.613139    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:30.613145    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:32.614077    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 18
	I0917 10:59:32.614089    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:32.614153    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:32.615012    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:32.615075    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:32.615087    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:32.615101    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:32.615139    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:32.615147    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:32.615156    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:32.615164    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:32.615178    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:32.615195    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:32.615219    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:32.615250    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:32.615258    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:32.615263    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:32.615270    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:32.615282    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:32.615290    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:32.615297    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:32.615305    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:34.616829    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 19
	I0917 10:59:34.616843    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:34.616947    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:34.617830    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:34.617895    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:34.617904    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:34.617911    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:34.617918    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:34.617937    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:34.617946    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:34.617955    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:34.617964    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:34.617970    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:34.617986    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:34.617994    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:34.618001    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:34.618007    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:34.618026    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:34.618038    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:34.618047    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:34.618054    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:34.618068    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:36.620075    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 20
	I0917 10:59:36.620089    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:36.620149    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:36.621044    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:36.621089    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:36.621098    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:36.621105    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:36.621111    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:36.621117    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:36.621122    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:36.621128    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:36.621134    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:36.621141    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:36.621158    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:36.621166    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:36.621192    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:36.621210    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:36.621226    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:36.621235    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:36.621249    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:36.621256    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:36.621265    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:38.623266    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 21
	I0917 10:59:38.623281    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:38.623351    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:38.624216    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:38.624287    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:38.624300    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:38.624313    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:38.624321    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:38.624328    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:38.624333    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:38.624350    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:38.624364    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:38.624390    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:38.624398    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:38.624405    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:38.624411    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:38.624417    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:38.624424    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:38.624431    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:38.624438    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:38.624445    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:38.624450    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:40.624656    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 22
	I0917 10:59:40.624669    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:40.624736    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:40.625666    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:40.625723    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:40.625741    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:40.625750    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:40.625757    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:40.625763    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:40.625772    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:40.625784    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:40.625794    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:40.625809    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:40.625821    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:40.625832    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:40.625844    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:40.625852    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:40.625861    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:40.625874    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:40.625883    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:40.625892    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:40.625900    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:42.627914    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 23
	I0917 10:59:42.627926    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:42.628002    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:42.628868    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:42.628925    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:42.628938    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:42.628948    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:42.628961    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:42.628970    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:42.628979    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:42.628988    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:42.628996    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:42.629002    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:42.629009    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:42.629025    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:42.629032    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:42.629038    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:42.629046    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:42.629052    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:42.629059    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:42.629065    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:42.629070    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:44.629802    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 24
	I0917 10:59:44.629814    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:44.629881    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:44.630892    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:44.630939    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:44.630953    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:44.630964    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:44.630974    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:44.630982    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:44.630994    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:44.631001    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:44.631006    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:44.631021    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:44.631032    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:44.631041    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:44.631048    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:44.631054    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:44.631061    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:44.631068    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:44.631073    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:44.631085    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:44.631098    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:46.631106    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 25
	I0917 10:59:46.631118    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:46.631182    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:46.632079    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:46.632143    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:46.632154    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:46.632161    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:46.632166    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:46.632174    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:46.632179    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:46.632185    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:46.632191    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:46.632203    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:46.632217    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:46.632228    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:46.632239    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:46.632247    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:46.632254    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:46.632261    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:46.632268    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:46.632275    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:46.632282    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:48.633017    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 26
	I0917 10:59:48.633032    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:48.633095    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:48.634143    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:48.634204    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:48.634217    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:48.634237    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:48.634244    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:48.634258    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:48.634270    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:48.634286    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:48.634298    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:48.634313    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:48.634322    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:48.634333    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:48.634340    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:48.634346    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:48.634354    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:48.634360    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:48.634368    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:48.634374    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:48.634381    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:50.636402    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 27
	I0917 10:59:50.636417    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:50.636479    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:50.637353    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:50.637403    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:50.637414    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:50.637423    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:50.637428    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:50.637444    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:50.637456    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:50.637464    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:50.637476    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:50.637482    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:50.637496    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:50.637507    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:50.637515    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:50.637525    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:50.637559    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:50.637571    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:50.637578    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:50.637584    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:50.637591    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:52.638925    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 28
	I0917 10:59:52.638941    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:52.639002    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:52.639893    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:52.639988    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:52.639996    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:52.640006    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:52.640012    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:52.640018    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:52.640023    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:52.640029    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:52.640034    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:52.640059    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:52.640072    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:52.640081    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:52.640089    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:52.640096    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:52.640104    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:52.640110    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:52.640116    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:52.640124    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:52.640131    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:54.642190    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Attempt 29
	I0917 10:59:54.642201    6290 main.go:141] libmachine: (docker-flags-702000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:59:54.642270    6290 main.go:141] libmachine: (docker-flags-702000) DBG | hyperkit pid from json: 6347
	I0917 10:59:54.643160    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Searching for c6:63:7c:93:82:8 in /var/db/dhcpd_leases ...
	I0917 10:59:54.643194    6290 main.go:141] libmachine: (docker-flags-702000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:59:54.643206    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:59:54.643215    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:59:54.643221    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:59:54.643227    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:59:54.643234    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:59:54.643242    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:59:54.643249    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:59:54.643254    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:59:54.643261    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:59:54.643269    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:59:54.643275    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:59:54.643280    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:59:54.643288    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:59:54.643295    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:59:54.643301    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:59:54.643307    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:59:54.643323    6290 main.go:141] libmachine: (docker-flags-702000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:59:56.645084    6290 client.go:171] duration metric: took 1m1.348206664s to LocalClient.Create
	I0917 10:59:58.646027    6290 start.go:128] duration metric: took 1m3.401414408s to createHost
	I0917 10:59:58.646040    6290 start.go:83] releasing machines lock for "docker-flags-702000", held for 1m3.401550029s
	W0917 10:59:58.646145    6290 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-702000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:63:7c:93:82:8
	I0917 10:59:58.688473    6290 out.go:201] 
	W0917 10:59:58.730686    6290 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:63:7c:93:82:8
	W0917 10:59:58.730701    6290 out.go:270] * 
	W0917 10:59:58.731313    6290 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:59:58.793598    6290 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-702000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-702000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-702000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (185.869863ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-702000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
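The literal `<no value>` in the suggestion above is Go's text/template placeholder for a missing value: the suggestion template was evidently executed without a profile name. A minimal, hypothetical reproduction (the template text and the "profile" key are assumptions for illustration, not minikube's actual code):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// A map lookup that misses renders as "<no value>" in text/template.
		t := template.Must(template.New("suggestion").Parse("minikube delete {{.profile}}\n"))
		t.Execute(os.Stdout, map[string]string{}) // prints: minikube delete <no value>
	}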
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-702000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
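The two expectations above amount to a substring check over the Environment property that systemd reports for the docker unit; because the ssh command failed, the captured output was just "\n\n". A hedged sketch of that check (variable names and the exact message wording are assumptions):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		out := "\n\n" // what the failed run actually captured (see stdout above)
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(out, kv) {
				fmt.Printf("expected env key/value %q to be included in: *%q*\n", kv, out)
			}
		}
	}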
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-702000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-702000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (167.750554ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-702000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-702000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-702000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "\n\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-17 10:59:59.260418 -0700 PDT m=+3901.155979516
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-702000 -n docker-flags-702000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-702000 -n docker-flags-702000: exit status 7 (82.406483ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:59:59.340610    6376 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 10:59:59.340634    6376 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-702000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-702000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-702000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-702000: (5.247695298s)
--- FAIL: TestDockerFlags (252.43s)
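The retry loop that dominates the log above polls /var/db/dhcpd_leases roughly every two seconds for the VM's generated MAC address and gives up after about 30 attempts, which is what surfaces as "IP address never found in dhcp leases file". A simplified sketch of that pattern, assuming the entry layout as echoed in the log (this is an illustration, not the hyperkit driver's actual code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"time"
	)

	// Matches entries in the form echoed above:
	// {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ...}
	var leaseRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)

	func findIP(mac string) (string, bool) {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			return "", false
		}
		for _, m := range leaseRe.FindAllStringSubmatch(string(data), -1) {
			if m[2] == mac {
				return m[1], true
			}
		}
		return "", false
	}

	func main() {
		mac := "c6:63:7c:93:82:8" // the MAC the failing run was waiting for
		for attempt := 0; attempt < 30; attempt++ {
			if ip, ok := findIP(mac); ok {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // attempts in the log are ~2s apart
		}
		fmt.Println("IP address never found in dhcp leases file")
	}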

                                                
                                    
TestForceSystemdFlag (252.09s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-812000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-812000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.506653753s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-812000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-812000" primary control-plane node in "force-systemd-flag-812000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-812000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:54:48.966622    6257 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:54:48.966816    6257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:48.966821    6257 out.go:358] Setting ErrFile to fd 2...
	I0917 10:54:48.966825    6257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:54:48.967028    6257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:54:48.968647    6257 out.go:352] Setting JSON to false
	I0917 10:54:48.991919    6257 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5055,"bootTime":1726590633,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:54:48.992077    6257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:54:49.014679    6257 out.go:177] * [force-systemd-flag-812000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:54:49.055558    6257 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:54:49.055588    6257 notify.go:220] Checking for updates...
	I0917 10:54:49.097669    6257 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:54:49.118509    6257 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:54:49.139676    6257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:54:49.160656    6257 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:54:49.183481    6257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:54:49.205146    6257 config.go:182] Loaded profile config "force-systemd-env-504000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:54:49.205246    6257 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:54:49.234663    6257 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 10:54:49.276666    6257 start.go:297] selected driver: hyperkit
	I0917 10:54:49.276682    6257 start.go:901] validating driver "hyperkit" against <nil>
	I0917 10:54:49.276692    6257 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:54:49.279687    6257 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:49.279821    6257 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:54:49.288241    6257 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:54:49.292169    6257 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:54:49.292188    6257 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:54:49.292222    6257 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:54:49.292433    6257 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 10:54:49.292463    6257 cni.go:84] Creating CNI manager for ""
	I0917 10:54:49.292496    6257 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:54:49.292505    6257 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:54:49.292570    6257 start.go:340] cluster config:
	{Name:force-systemd-flag-812000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:54:49.292657    6257 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:54:49.334442    6257 out.go:177] * Starting "force-systemd-flag-812000" primary control-plane node in "force-systemd-flag-812000" cluster
	I0917 10:54:49.355624    6257 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:54:49.355660    6257 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:54:49.355678    6257 cache.go:56] Caching tarball of preloaded images
	I0917 10:54:49.355791    6257 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:54:49.355800    6257 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:54:49.355876    6257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/force-systemd-flag-812000/config.json ...
	I0917 10:54:49.355895    6257 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/force-systemd-flag-812000/config.json: {Name:mkee0260dad23185c700cd78e66e9c99abf72490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:54:49.356224    6257 start.go:360] acquireMachinesLock for force-systemd-flag-812000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:55:46.377795    6257 start.go:364] duration metric: took 57.022883883s to acquireMachinesLock for "force-systemd-flag-812000"
	I0917 10:55:46.377837    6257 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:55:46.377892    6257 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:55:46.420218    6257 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:55:46.420414    6257 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:55:46.420481    6257 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:55:46.429699    6257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53769
	I0917 10:55:46.430190    6257 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:55:46.430747    6257 main.go:141] libmachine: Using API Version  1
	I0917 10:55:46.430758    6257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:55:46.431039    6257 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:55:46.431155    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .GetMachineName
	I0917 10:55:46.431250    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .DriverName
	I0917 10:55:46.431374    6257 start.go:159] libmachine.API.Create for "force-systemd-flag-812000" (driver="hyperkit")
	I0917 10:55:46.431403    6257 client.go:168] LocalClient.Create starting
	I0917 10:55:46.431432    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:55:46.431484    6257 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:46.431511    6257 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:46.431603    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:55:46.431665    6257 main.go:141] libmachine: Decoding PEM data...
	I0917 10:55:46.431677    6257 main.go:141] libmachine: Parsing certificate...
	I0917 10:55:46.431727    6257 main.go:141] libmachine: Running pre-create checks...
	I0917 10:55:46.431733    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .PreCreateCheck
	I0917 10:55:46.431837    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:46.431986    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .GetConfigRaw
	I0917 10:55:46.441689    6257 main.go:141] libmachine: Creating machine...
	I0917 10:55:46.441699    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .Create
	I0917 10:55:46.441827    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:46.441964    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:55:46.441802    6275 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:55:46.441989    6257 main.go:141] libmachine: (force-systemd-flag-812000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:55:46.870173    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:55:46.870063    6275 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/id_rsa...
	I0917 10:55:46.959921    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:55:46.959866    6275 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/force-systemd-flag-812000.rawdisk...
	I0917 10:55:46.959936    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Writing magic tar header
	I0917 10:55:46.959952    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Writing SSH key tar header
	I0917 10:55:46.960222    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:55:46.960197    6275 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000 ...
	I0917 10:55:47.323828    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:47.323844    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/hyperkit.pid
	I0917 10:55:47.323872    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Using UUID ed6f5201-0841-4e29-92db-0d0b9baf3bb8
	I0917 10:55:47.348820    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Generated MAC 8e:b3:bc:4a:a7:44
	I0917 10:55:47.348841    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-812000
	I0917 10:55:47.348877    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ed6f5201-0841-4e29-92db-0d0b9baf3bb8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000208630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:55:47.348910    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ed6f5201-0841-4e29-92db-0d0b9baf3bb8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000208630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:55:47.348960    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ed6f5201-0841-4e29-92db-0d0b9baf3bb8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/force-systemd-flag-812000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-812000"}
	I0917 10:55:47.348996    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ed6f5201-0841-4e29-92db-0d0b9baf3bb8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/force-systemd-flag-812000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-812000"
	I0917 10:55:47.349005    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:55:47.351947    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 DEBUG: hyperkit: Pid is 6289
	I0917 10:55:47.352435    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 0
	I0917 10:55:47.352449    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:47.352533    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:47.353641    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:47.353727    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:47.353765    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:47.353781    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:47.353804    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:47.353822    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:47.353837    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:47.353853    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:47.353883    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:47.353912    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:47.353925    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:47.353939    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:47.353959    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:47.353976    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:47.353985    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:47.353992    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:47.354000    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:47.354008    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:47.354024    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:47.359710    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:55:47.367723    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:55:47.368486    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:55:47.368517    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:55:47.368543    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:55:47.368562    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:55:47.743124    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:55:47.743150    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:55:47.857835    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:55:47.857851    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:55:47.857875    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:55:47.857887    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:55:47.858744    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:55:47.858755    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:47 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:55:49.355931    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 1
	I0917 10:55:49.355947    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:49.356023    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:49.356995    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:49.357034    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:49.357042    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:49.357051    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:49.357058    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:49.357065    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:49.357085    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:49.357093    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:49.357107    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:49.357113    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:49.357122    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:49.357130    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:49.357145    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:49.357158    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:49.357166    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:49.357173    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:49.357188    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:49.357200    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:49.357211    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
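
	What this loop is doing: after booting the VM, the hyperkit driver polls /var/db/dhcpd_leases once per attempt, scanning the entries for the new machine's MAC address (8e:b3:bc:4a:a7:44) to learn its IP; every attempt here finds the same 17 existing leases and comes up empty. A minimal sketch of such a lease scan in Go, assuming the simplified macOS lease-file layout (name/ip_address/hw_address per entry) and written for illustration rather than taken from the driver's source:

	package main

	import (
		"bufio"
		"fmt"
		"io"
		"os"
		"strings"
	)

	// findIPForMAC scans a dhcpd_leases-style stream and returns the
	// ip_address of the entry whose hw_address ends with the target MAC.
	// Assumption: within each entry, ip_address precedes hw_address, e.g.
	//   name=minikube
	//   ip_address=192.169.0.18
	//   hw_address=1,b6:5:7b:7:a4:ad
	func findIPForMAC(r io.Reader, mac string) (string, bool) {
		var ip string
		sc := bufio.NewScanner(r)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address carries a "1," type prefix before the MAC
				if strings.HasSuffix(line, mac) {
					return ip, true
				}
			}
		}
		return "", false
	}

	func main() {
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		if ip, ok := findIPForMAC(f, "8e:b3:bc:4a:a7:44"); ok {
			fmt.Println("lease found:", ip)
		} else {
			fmt.Println("no lease yet") // the case every attempt below hits
		}
	}
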
	I0917 10:55:51.358266    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 2
	I0917 10:55:51.358281    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:51.358387    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:51.359282    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:51.359363    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:55:53.237162    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:53 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:55:53.237271    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:53 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:55:53.237280    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:53 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:55:53.257290    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:55:53 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:55:53.359757    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 3
	I0917 10:55:53.359784    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:53.359916    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:53.361561    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:53.361667    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:55:55.361990    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 4
	I0917 10:55:55.362009    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:55.362101    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:55.363003    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:55.363058    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:55:57.363478    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 5
	I0917 10:55:57.363492    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:57.363529    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:57.364587    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:57.364621    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:55:59.366754    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 6
	I0917 10:55:59.366768    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:59.366846    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:55:59.367804    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:55:59.367844    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:01.369813    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 7
	I0917 10:56:01.369828    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:01.369875    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:01.370824    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:01.370877    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:03.371312    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 8
	I0917 10:56:03.371330    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:03.371386    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:03.372245    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:03.372304    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:05.373961    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 9
	I0917 10:56:05.373975    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:05.374048    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:05.374924    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:05.374979    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:07.375794    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 10
	I0917 10:56:07.375816    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:07.375877    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:07.376750    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:07.376805    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:09.378964    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 11
	I0917 10:56:09.378989    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:09.379037    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:09.379929    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:09.379974    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:11.380521    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 12
	I0917 10:56:11.380536    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:11.380660    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:11.381608    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:11.381653    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:13.383404    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 13
	I0917 10:56:13.383419    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:13.383468    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:13.384350    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:13.384419    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... same 17 dhcp entries as in Attempt 1 (192.169.0.2 through 192.169.0.18); 8e:b3:bc:4a:a7:44 not among them ...]
	I0917 10:56:15.385188    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 14
	I0917 10:56:15.385203    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:15.385264    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:15.386140    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:15.386198    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:15.386210    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:15.386217    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:15.386227    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:15.386237    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:15.386247    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:15.386255    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:15.386263    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:15.386268    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:15.386275    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:15.386281    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:15.386287    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:15.386295    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:15.386303    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:15.386316    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:15.386328    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:15.386345    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:15.386357    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
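
The Lease field in each entry decodes as the lease's expiry time in hex Unix seconds (an assumption about bootpd's on-disk format, but the values line up with this run). For the newest entry above:

	fmt.Println(time.Unix(0x66eb12c0, 0).UTC()) // 2024-09-18 17:49:52 +0000 UTC

That is roughly a day after this 2024-09-17 run, consistent with ~24-hour leases; the 0x66e9... values decode to times already in the past, i.e. stale leases that bootpd has not yet purged from the file.
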
	I0917 10:56:17.388363    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 15
	I0917 10:56:17.388379    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:17.388414    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:17.389295    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:17.389333    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:17.389343    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:17.389364    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:17.389379    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:17.389402    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:17.389416    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:17.389424    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:17.389438    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:17.389446    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:17.389460    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:17.389468    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:17.389476    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:17.389485    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:17.389499    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:17.389508    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:17.389515    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:17.389523    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:17.389540    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:19.391538    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 16
	I0917 10:56:19.391551    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:19.391618    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:19.392509    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:19.392544    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:19.392551    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:19.392561    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:19.392569    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:19.392579    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:19.392587    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:19.392593    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:19.392600    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:19.392618    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:19.392629    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:19.392646    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:19.392658    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:19.392670    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:19.392678    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:19.392685    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:19.392711    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:19.392745    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:19.392753    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:21.394775    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 17
	I0917 10:56:21.394791    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:21.394868    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:21.395737    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:21.395788    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:21.395801    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:21.395808    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:21.395813    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:21.395840    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:21.395854    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:21.395861    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:21.395869    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:21.395883    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:21.395893    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:21.395901    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:21.395907    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:21.395913    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:21.395921    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:21.395928    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:21.395934    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:21.395941    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:21.395949    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:23.397991    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 18
	I0917 10:56:23.398005    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:23.398014    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:23.398890    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:23.398943    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:23.398953    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:23.398981    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:23.398997    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:23.399003    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:23.399012    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:23.399018    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:23.399038    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:23.399050    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:23.399059    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:23.399067    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:23.399081    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:23.399093    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:23.399102    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:23.399107    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:23.399123    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:23.399135    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:23.399144    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:25.400802    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 19
	I0917 10:56:25.400817    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:25.400836    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:25.401751    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:25.401796    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:25.401805    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:25.401840    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:25.401849    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:25.401871    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:25.401886    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:25.401894    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:25.401902    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:25.401917    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:25.401930    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:25.401946    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:25.401955    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:25.401963    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:25.401969    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:25.401975    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:25.401991    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:25.401998    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:25.402005    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:27.402776    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 20
	I0917 10:56:27.402788    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:27.402830    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:27.403721    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:27.403780    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:27.403792    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:27.403816    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:27.403847    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:27.403859    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:27.403868    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:27.403879    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:27.403886    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:27.403911    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:27.403918    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:27.403924    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:27.403932    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:27.403939    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:27.403947    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:27.403954    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:27.403965    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:27.403974    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:27.403982    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:29.406039    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 21
	I0917 10:56:29.406066    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:29.406105    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:29.407300    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:29.407343    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:29.407359    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:29.407370    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:29.407378    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:29.407384    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:29.407399    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:29.407410    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:29.407419    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:29.407426    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:29.407434    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:29.407444    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:29.407450    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:29.407456    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:29.407464    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:29.407479    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:29.407490    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:29.407499    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:29.407507    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:31.407744    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 22
	I0917 10:56:31.407759    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:31.407769    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:31.408838    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:31.408892    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:31.408907    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:31.408936    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:31.408946    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:31.408959    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:31.408969    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:31.408976    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:31.408983    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:31.408990    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:31.409000    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:31.409008    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:31.409017    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:31.409024    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:31.409032    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:31.409040    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:31.409047    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:31.409071    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:31.409083    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:33.410396    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 23
	I0917 10:56:33.410410    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:33.410420    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:33.411301    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:33.411326    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:33.411340    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:33.411351    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:33.411356    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:33.411363    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:33.411369    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:33.411385    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:33.411399    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:33.411411    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:33.411420    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:33.411427    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:33.411438    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:33.411449    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:33.411460    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:33.411469    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:33.411476    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:33.411484    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:33.411492    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:35.413552    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 24
	I0917 10:56:35.413568    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:35.413630    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:35.414547    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:35.414581    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:35.414589    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:35.414607    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:35.414624    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:35.414631    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:35.414638    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:35.414644    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:35.414650    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:35.414674    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:35.414686    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:35.414696    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:35.414705    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:35.414720    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:35.414734    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:35.414748    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:35.414761    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:35.414768    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:35.414775    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:37.415897    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 25
	I0917 10:56:37.415912    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:37.415969    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:37.416855    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:37.416901    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:37.416913    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:37.416933    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:37.416940    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:37.416948    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:37.416956    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:37.416963    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:37.416969    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:37.416975    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:37.416982    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:37.416989    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:37.417001    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:37.417007    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:37.417014    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:37.417031    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:37.417042    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:37.417049    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:37.417054    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:39.419087    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 26
	I0917 10:56:39.419100    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:39.419149    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:39.420021    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:39.420067    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:39.420082    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:39.420103    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:39.420112    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:39.420122    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:39.420129    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:39.420135    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:39.420144    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:39.420155    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:39.420162    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:39.420169    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:39.420185    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:39.420197    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:39.420209    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:39.420217    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:39.420225    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:39.420230    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:39.420236    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:41.422275    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 27
	I0917 10:56:41.422290    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:41.422302    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:41.423269    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:41.423324    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:41.423334    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:41.423346    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:41.423356    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:41.423366    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:41.423372    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:41.423379    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:41.423385    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:41.423391    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:41.423396    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:41.423402    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:41.423410    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:41.423417    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:41.423423    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:41.423429    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:41.423435    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:41.423441    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:41.423449    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:43.424021    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 28
	I0917 10:56:43.424036    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:43.424077    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:43.424918    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:43.424986    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:43.424996    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:43.425005    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:43.425014    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:43.425031    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:43.425037    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:43.425043    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:43.425049    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:43.425060    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:43.425068    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:43.425075    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:43.425086    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:43.425093    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:43.425101    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:43.425112    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:43.425123    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:43.425130    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:43.425137    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:45.425327    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 29
	I0917 10:56:45.425351    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:45.425414    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:45.426325    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 8e:b3:bc:4a:a7:44 in /var/db/dhcpd_leases ...
	I0917 10:56:45.426377    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:56:45.426388    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:56:45.426396    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:56:45.426404    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:56:45.426420    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:56:45.426433    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:56:45.426441    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:56:45.426449    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:56:45.426456    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:56:45.426464    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:56:45.426479    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:56:45.426490    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:56:45.426498    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:56:45.426505    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:56:45.426514    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:56:45.426521    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:56:45.426531    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:56:45.426545    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:56:47.427471    6257 client.go:171] duration metric: took 1m0.99626189s to LocalClient.Create
	I0917 10:56:49.428291    6257 start.go:128] duration metric: took 1m3.05058759s to createHost
	I0917 10:56:49.428311    6257 start.go:83] releasing machines lock for "force-systemd-flag-812000", held for 1m3.050713614s
	W0917 10:56:49.428325    6257 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:b3:bc:4a:a7:44
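
	The failure above is the end of the hyperkit driver's DHCP polling loop: on each attempt it re-reads /var/db/dhcpd_leases looking for an entry whose HWAddress matches the MAC generated for the new VM (8e:b3:bc:4a:a7:44 in this run), and after roughly thirty two-second attempts it gives up with the "IP address never found" error shown here. Below is a minimal Go sketch of that lookup, assuming the lease entries take the {Name:... IPAddress:... HWAddress:...} form printed in the log; findIP and its regex are illustrative, not the driver's actual parser.

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"regexp"
	    )

	    // findIP scans a macOS dhcpd_leases file for a MAC address and returns
	    // the associated IP. Illustrative only: the entry format assumed here is
	    // the {Name:... IPAddress:... HWAddress:...} form printed in this log.
	    func findIP(leaseFile, mac string) (string, error) {
	    	data, err := os.ReadFile(leaseFile)
	    	if err != nil {
	    		return "", err
	    	}
	    	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:` + regexp.QuoteMeta(mac) + `\b`)
	    	if m := re.FindSubmatch(data); m != nil {
	    		return string(m[1]), nil
	    	}
	    	return "", fmt.Errorf("could not find an IP address for %s", mac)
	    }

	    func main() {
	    	ip, err := findIP("/var/db/dhcpd_leases", "8e:b3:bc:4a:a7:44")
	    	if err != nil {
	    		fmt.Println(err) // this run hit exactly this case on every attempt
	    		return
	    	}
	    	fmt.Println("lease found:", ip)
	    }

	Run against this run's lease file, the sketch fails the same way: none of the 17 entries carries the generated MAC.
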
	I0917 10:56:49.428637    6257 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:56:49.428655    6257 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:56:49.438028    6257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53785
	I0917 10:56:49.438454    6257 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:56:49.438898    6257 main.go:141] libmachine: Using API Version  1
	I0917 10:56:49.438932    6257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:56:49.439318    6257 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:56:49.439741    6257 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:56:49.439779    6257 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:56:49.448633    6257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53787
	I0917 10:56:49.448963    6257 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:56:49.449344    6257 main.go:141] libmachine: Using API Version  1
	I0917 10:56:49.449363    6257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:56:49.449597    6257 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:56:49.449701    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .GetState
	I0917 10:56:49.449781    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:49.449865    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:49.450963    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .DriverName
	I0917 10:56:49.471639    6257 out.go:177] * Deleting "force-systemd-flag-812000" in hyperkit ...
	I0917 10:56:49.513595    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .Remove
	I0917 10:56:49.513724    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:49.513733    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:49.513793    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:49.514843    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:49.514903    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | waiting for graceful shutdown
	I0917 10:56:50.516578    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:50.516680    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:50.517751    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | waiting for graceful shutdown
	I0917 10:56:51.518715    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:51.518807    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:51.520436    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | waiting for graceful shutdown
	I0917 10:56:52.522124    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:52.522222    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:52.522865    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | waiting for graceful shutdown
	I0917 10:56:53.524977    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:53.525055    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:53.525700    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | waiting for graceful shutdown
	I0917 10:56:54.525837    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:56:54.525931    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6289
	I0917 10:56:54.526770    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | sending sigkill
	I0917 10:56:54.526783    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0917 10:56:54.539380    6257 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:b3:bc:4a:a7:44
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:b3:bc:4a:a7:44
	I0917 10:56:54.539406    6257 start.go:729] Will try again in 5 seconds ...
	I0917 10:56:54.549709    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:56:54 WARN : hyperkit: failed to read stdout: EOF
	I0917 10:56:54.549727    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:56:54 WARN : hyperkit: failed to read stderr: EOF
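
	Once the failed VM is killed (SIGKILL after the graceful-shutdown window above), minikube tolerates a single StartHost failure: it waits five seconds, re-acquires the machines lock, and provisions a brand-new host, which is what the remainder of this log shows. A simplified Go sketch of that retry shape follows; startHost is a placeholder, not the real start.go implementation.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // startHost stands in for minikube's host-creation step. The first
	    // attempt fails the way this run did; the retry succeeds.
	    func startHost(attempt int) error {
	    	if attempt == 0 {
	    		return errors.New("IP address never found in dhcp leases file")
	    	}
	    	return nil
	    }

	    func main() {
	    	if err := startHost(0); err != nil {
	    		fmt.Println("! StartHost failed, but will try again:", err)
	    		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
	    		if err := startHost(1); err != nil {
	    			fmt.Println("start failed permanently:", err)
	    			return
	    		}
	    	}
	    	fmt.Println("host started")
	    }
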
	I0917 10:56:59.541445    6257 start.go:360] acquireMachinesLock for force-systemd-flag-812000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:57:52.341343    6257 start.go:364] duration metric: took 52.800003812s to acquireMachinesLock for "force-systemd-flag-812000"
	I0917 10:57:52.341364    6257 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:57:52.341471    6257 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:57:52.362871    6257 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:57:52.362958    6257 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:57:52.362982    6257 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:57:52.371688    6257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53795
	I0917 10:57:52.372039    6257 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:57:52.372414    6257 main.go:141] libmachine: Using API Version  1
	I0917 10:57:52.372434    6257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:57:52.372669    6257 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:57:52.372795    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .GetMachineName
	I0917 10:57:52.372894    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .DriverName
	I0917 10:57:52.373017    6257 start.go:159] libmachine.API.Create for "force-systemd-flag-812000" (driver="hyperkit")
	I0917 10:57:52.373034    6257 client.go:168] LocalClient.Create starting
	I0917 10:57:52.373058    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:57:52.373117    6257 main.go:141] libmachine: Decoding PEM data...
	I0917 10:57:52.373127    6257 main.go:141] libmachine: Parsing certificate...
	I0917 10:57:52.373167    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:57:52.373203    6257 main.go:141] libmachine: Decoding PEM data...
	I0917 10:57:52.373215    6257 main.go:141] libmachine: Parsing certificate...
	I0917 10:57:52.373227    6257 main.go:141] libmachine: Running pre-create checks...
	I0917 10:57:52.373232    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .PreCreateCheck
	I0917 10:57:52.373311    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:52.373344    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .GetConfigRaw
	I0917 10:57:52.425616    6257 main.go:141] libmachine: Creating machine...
	I0917 10:57:52.425640    6257 main.go:141] libmachine: (force-systemd-flag-812000) Calling .Create
	I0917 10:57:52.425729    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:52.425856    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:57:52.425720    6324 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:57:52.425911    6257 main.go:141] libmachine: (force-systemd-flag-812000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:57:52.638946    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:57:52.638850    6324 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/id_rsa...
	I0917 10:57:52.763153    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:57:52.763078    6324 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/force-systemd-flag-812000.rawdisk...
	I0917 10:57:52.763163    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Writing magic tar header
	I0917 10:57:52.763174    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Writing SSH key tar header
	I0917 10:57:52.763749    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | I0917 10:57:52.763702    6324 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000 ...
	I0917 10:57:53.130940    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:53.130958    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/hyperkit.pid
	I0917 10:57:53.131009    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Using UUID 04d2fd37-df5a-49e8-9e71-10df191a94f4
	I0917 10:57:53.155585    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Generated MAC 82:b9:77:71:10:d5
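
	Each created VM gets a freshly generated random MAC; 82:b9:77:71:10:d5 above has the locally administered bit set and the multicast bit clear, the usual recipe for synthetic addresses. A short sketch of generating such an address using only the Go standard library; randomMAC is illustrative, not the driver's generator.

	    package main

	    import (
	    	"crypto/rand"
	    	"fmt"
	    )

	    // randomMAC returns a locally administered, unicast MAC address, the
	    // same class of address as the "Generated MAC" logged above.
	    func randomMAC() (string, error) {
	    	buf := make([]byte, 6)
	    	if _, err := rand.Read(buf); err != nil {
	    		return "", err
	    	}
	    	buf[0] = (buf[0] | 0x02) &^ 0x01 // set the local bit, clear the multicast bit
	    	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
	    		buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]), nil
	    }

	    func main() {
	    	mac, err := randomMAC()
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("generated MAC:", mac)
	    }
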
	I0917 10:57:53.155600    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-812000
	I0917 10:57:53.155636    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"04d2fd37-df5a-49e8-9e71-10df191a94f4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:57:53.155664    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"04d2fd37-df5a-49e8-9e71-10df191a94f4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:57:53.155722    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "04d2fd37-df5a-49e8-9e71-10df191a94f4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/force-systemd-flag-812000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-812000"}
	I0917 10:57:53.155772    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 04d2fd37-df5a-49e8-9e71-10df191a94f4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/force-systemd-flag-812000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-812000"
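
	The Arguments and CmdLine dumps above record the exact hyperkit invocation, so the same process can be reassembled outside the driver when debugging boot problems. The sketch below rebuilds the core of that argv with os/exec, reusing the paths and UUID from this run but omitting the -l (serial console) and -f (kexec boot) flags for brevity; it prints the command instead of running it, since launching hyperkit requires root and the boot assets.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// State directory taken verbatim from this run's logs.
	    	state := "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000"
	    	cmd := exec.Command("/usr/local/bin/hyperkit",
	    		"-A", "-u",
	    		"-F", state+"/hyperkit.pid",
	    		"-c", "2", "-m", "2048M",
	    		"-s", "0:0,hostbridge",
	    		"-s", "31,lpc",
	    		"-s", "1:0,virtio-net",
	    		"-U", "04d2fd37-df5a-49e8-9e71-10df191a94f4",
	    		"-s", "2:0,virtio-blk,"+state+"/force-systemd-flag-812000.rawdisk",
	    		"-s", "3,ahci-cd,"+state+"/boot2docker.iso",
	    		"-s", "4,virtio-rnd",
	    	)
	    	// Print rather than run: launching hyperkit needs root and the boot assets.
	    	fmt.Println(cmd.String())
	    }
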
	I0917 10:57:53.155781    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:57:53.158563    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 DEBUG: hyperkit: Pid is 6325
	I0917 10:57:53.159004    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 0
	I0917 10:57:53.159019    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:53.159122    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:57:53.160714    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:57:53.160814    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:53.160833    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:53.160879    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:53.160902    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:53.160926    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:53.160941    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:53.160951    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:53.160978    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:53.160992    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:53.161007    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:53.161019    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:53.161030    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:53.161043    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:53.161054    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:53.161065    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:53.161077    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:53.161097    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:53.161111    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:53.166111    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:57:53.174141    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-flag-812000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:57:53.174921    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:57:53.174940    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:57:53.174947    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:57:53.174953    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:57:53.551333    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:57:53.551350    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:57:53.665964    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:57:53.665984    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:57:53.666016    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:57:53.666045    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:57:53.666850    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:57:53.666859    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:57:55.162047    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 1
	I0917 10:57:55.162060    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:55.162181    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:57:55.163110    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:57:55.163162    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:55.163174    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:55.163187    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:55.163194    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:55.163209    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:55.163215    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:55.163221    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:55.163246    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:55.163254    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:55.163265    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:55.163276    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:55.163284    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:55.163291    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:55.163299    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:55.163306    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:55.163313    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:55.163319    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:55.163327    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:57.164885    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 2
	I0917 10:57:57.164902    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:57.164959    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:57:57.165922    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:57:57.165945    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:57.165960    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:57.165980    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:57.165995    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:57.166013    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:57.166019    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:57.166025    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:57.166031    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:57.166038    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:57.166045    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:57.166052    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:57.166058    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:57.166066    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:57.166079    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:57.166086    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:57.166092    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:57.166103    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:57.166116    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:57:59.054898    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:57:59.055047    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:57:59.055058    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:57:59.075279    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | 2024/09/17 10:57:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:57:59.168264    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 3
	I0917 10:57:59.168292    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:57:59.168408    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:57:59.170067    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:57:59.170161    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:57:59.170181    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:57:59.170220    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:57:59.170237    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:57:59.170252    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:57:59.170269    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:57:59.170288    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:57:59.170301    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:57:59.170311    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:57:59.170322    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:57:59.170342    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:57:59.170360    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:57:59.170371    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:57:59.170381    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:57:59.170402    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:57:59.170414    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:57:59.170435    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:57:59.170446    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:01.171379    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 4
	I0917 10:58:01.171393    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:01.171472    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:01.172364    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:01.172434    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:01.172445    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:01.172455    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:01.172463    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:01.172469    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:01.172475    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:01.172499    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:01.172512    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:01.172533    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:01.172542    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:01.172550    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:01.172555    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:01.172561    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:01.172576    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:01.172589    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:01.172596    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:01.172605    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:01.172614    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:03.173642    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 5
	I0917 10:58:03.173654    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:03.173708    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:03.174598    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:03.174657    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:03.174666    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:03.174680    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:03.174693    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:03.174701    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:03.174708    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:03.174714    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:03.174720    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:03.174732    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:03.174748    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:03.174766    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:03.174774    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:03.174792    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:03.174804    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:03.174824    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:03.174841    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:03.174849    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:03.174857    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
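
The block above is one complete polling pass: the driver searches /var/db/dhcpd_leases for the VM's generated MAC (82:b9:77:71:10:d5), finds only the 17 pre-existing minikube entries, and schedules another attempt. Below is a minimal Go sketch of that matching step, not the hyperkit driver's actual implementation; DHCPEntry, trimMAC, and ipFromLeases are illustrative names, and the octet normalization assumes the unpadded form visible in the entries above (e.g. b6:5:7b:7:a4:ad rather than b6:05:7b:07:a4:ad).

package main

import (
	"fmt"
	"strings"
)

// DHCPEntry mirrors the fields printed in the "dhcp entry:" log lines above.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
	Lease     string
}

// trimMAC normalizes a MAC the way the leases file above appears to store it:
// each octet lowercased and written without a leading zero.
func trimMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// ipFromLeases returns the IP for mac, or false if no lease matches yet --
// the condition that keeps the "Attempt N" loop above retrying.
func ipFromLeases(entries []DHCPEntry, mac string) (string, bool) {
	want := trimMAC(mac)
	for _, e := range entries {
		if trimMAC(e.HWAddress) == want {
			return e.IPAddress, true
		}
	}
	return "", false
}

func main() {
	entries := []DHCPEntry{
		{Name: "minikube", IPAddress: "192.169.0.18", HWAddress: "b6:5:7b:7:a4:ad", Lease: "0x66eb12c0"},
	}
	if ip, ok := ipFromLeases(entries, "82:b9:77:71:10:d5"); ok {
		fmt.Println("found:", ip)
	} else {
		// Matches the run above: 17 entries scanned, none for the new VM's MAC.
		fmt.Println("no lease yet; retry")
	}
}
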
	I0917 10:58:05.176839    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 6
	I0917 10:58:05.176852    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:05.176889    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:05.177801    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:05.177837    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:05.177846    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:05.177865    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:05.177877    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:05.177885    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:05.177891    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:05.177904    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:05.177915    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:05.177922    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:05.177931    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:05.177940    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:05.177948    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:05.177955    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:05.177966    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:05.177973    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:05.177978    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:05.177985    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:05.177993    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
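
Each attempt also re-reads the hyperkit pid recorded in the machine's JSON state ("hyperkit pid from json: 6325") before re-scanning the leases file, which reads like a liveness check on the VM process. A hedged, Unix-only sketch of such a probe follows; pidAlive is a hypothetical helper, not the driver's real API.

package main

import (
	"fmt"
	"syscall"
)

// pidAlive sends signal 0, which performs error checking only: ESRCH means
// no such process, while EPERM means the process exists but isn't ours.
func pidAlive(pid int) bool {
	err := syscall.Kill(pid, 0)
	return err == nil || err == syscall.EPERM
}

func main() {
	fmt.Println(pidAlive(6325)) // pid printed repeatedly in the log above
}
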
	I0917 10:58:07.179984    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 7
	I0917 10:58:07.179996    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:07.180058    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:07.180991    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:07.181034    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:07.181044    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:07.181053    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:07.181059    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:07.181065    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:07.181071    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:07.181078    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:07.181085    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:07.181092    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:07.181100    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:07.181112    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:07.181123    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:07.181139    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:07.181148    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:07.181155    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:07.181162    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:07.181169    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:07.181175    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:09.182822    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 8
	I0917 10:58:09.182836    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:09.182904    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:09.183806    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:09.183843    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:09.183861    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:09.183868    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:09.183874    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:09.183879    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:09.183911    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:09.183921    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:09.183930    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:09.183945    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:09.183957    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:09.183965    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:09.183978    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:09.183986    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:09.183995    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:09.184004    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:09.184011    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:09.184018    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:09.184024    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:11.184159    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 9
	I0917 10:58:11.184175    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:11.184279    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:11.185429    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:11.185459    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:11.185470    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:11.185480    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:11.185485    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:11.185492    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:11.185499    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:11.185528    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:11.185542    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:11.185553    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:11.185566    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:11.185587    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:11.185598    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:11.185614    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:11.185626    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:11.185648    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:11.185659    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:11.185667    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:11.185672    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:13.187641    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 10
	I0917 10:58:13.187670    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:13.187704    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:13.188604    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:13.188648    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:13.188658    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:13.188669    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:13.188678    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:13.188684    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:13.188691    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:13.188710    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:13.188718    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:13.188734    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:13.188747    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:13.188755    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:13.188761    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:13.188769    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:13.188777    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:13.188787    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:13.188795    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:13.188802    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:13.188808    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:15.189937    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 11
	I0917 10:58:15.189952    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:15.189995    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:15.190885    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:15.190944    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:15.190958    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:15.190968    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:15.190974    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:15.190981    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:15.190988    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:15.190994    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:15.191003    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:15.191022    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:15.191037    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:15.191050    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:15.191060    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:15.191068    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:15.191075    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:15.191081    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:15.191095    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:15.191107    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:15.191116    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:17.193151    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 12
	I0917 10:58:17.193166    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:17.193253    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:17.194138    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:17.194184    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:17.194193    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:17.194204    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:17.194211    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:17.194218    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:17.194229    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:17.194236    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:17.194243    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:17.194249    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:17.194255    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:17.194272    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:17.194289    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:17.194303    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:17.194341    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:17.194351    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:17.194373    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:17.194383    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:17.194392    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:19.196409    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 13
	I0917 10:58:19.196420    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:19.196477    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:19.197373    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:19.197422    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:19.197435    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:19.197444    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:19.197449    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:19.197455    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:19.197473    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:19.197481    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:19.197488    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:19.197504    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:19.197512    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:19.197521    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:19.197544    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:19.197552    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:19.197559    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:19.197565    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:19.197571    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:19.197579    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:19.197600    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:21.199594    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 14
	I0917 10:58:21.199609    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:21.199631    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:21.200869    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:21.200923    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:21.200933    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:21.200957    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:21.200969    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:21.200984    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:21.200991    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:21.200999    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:21.201006    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:21.201011    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:21.201026    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:21.201040    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:21.201051    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:21.201109    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:21.201118    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:21.201124    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:21.201136    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:21.201151    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:21.201159    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:23.203109    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 15
	I0917 10:58:23.203124    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:23.203152    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:23.204078    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:23.204122    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:23.204137    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:23.204155    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:23.204164    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:23.204184    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:23.204195    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:23.204202    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:23.204211    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:23.204227    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:23.204234    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:23.204240    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:23.204254    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:23.204266    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:23.204274    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:23.204282    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:23.204297    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:23.204306    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:23.204315    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:25.205823    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 16
	I0917 10:58:25.205839    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:25.205916    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:25.206744    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:25.206783    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:25.206794    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:25.206802    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:25.206813    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:25.206832    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:25.206842    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:25.206849    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:25.206858    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:25.206865    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:25.206871    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:25.206883    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:25.206895    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:25.206903    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:25.206910    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:25.206927    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:25.206939    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:25.206947    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:25.206954    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:27.208956    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 17
	I0917 10:58:27.208971    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:27.209036    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:27.210078    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:27.210133    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:27.210149    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:27.210162    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:27.210175    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:27.210183    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:27.210192    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:27.210198    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:27.210206    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:27.210213    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:27.210221    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:27.210233    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:27.210244    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:27.210251    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:27.210258    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:27.210274    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:27.210287    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:27.210294    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:27.210301    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:29.212332    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 18
	I0917 10:58:29.212347    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:29.212397    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:29.213319    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:29.213370    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:29.213383    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:29.213396    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:29.213405    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:29.213424    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:29.213436    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:29.213451    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:29.213463    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:29.213484    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:29.213497    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:29.213510    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:29.213520    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:29.213527    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:29.213533    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:29.213549    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:29.213560    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:29.213577    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:29.213587    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:31.214435    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 19
	I0917 10:58:31.214447    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:31.214497    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:31.215388    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:31.215440    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:31.215454    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:31.215463    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:31.215472    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:31.215489    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:31.215497    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:31.215508    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:31.215516    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:31.215525    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:31.215535    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:31.215549    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:31.215564    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:31.215572    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:31.215581    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:31.215593    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:31.215600    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:31.215607    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:31.215614    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:33.217602    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 20
	I0917 10:58:33.217617    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:33.217725    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:33.218655    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:33.218701    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:33.218713    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:33.218722    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:33.218740    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:33.218753    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:33.218764    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:33.218776    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:33.218785    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:33.218791    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:33.218798    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:33.218804    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:33.218811    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:33.218820    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:33.218835    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:33.218869    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:33.218880    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:33.218888    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:33.218894    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:35.220555    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 21
	I0917 10:58:35.220570    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:35.220640    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:35.221622    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:35.221666    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:35.221676    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:35.221687    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:35.221697    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:35.221708    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:35.221714    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:35.221720    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:35.221728    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:35.221737    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:35.221745    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:35.221754    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:35.221760    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:35.221767    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:35.221772    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:35.221778    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:35.221783    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:35.221793    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:35.221801    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:37.223862    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 22
	I0917 10:58:37.223874    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:37.223929    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:37.224814    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:37.224875    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:37.224883    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:37.224889    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:37.224896    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:37.224910    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:37.224921    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:37.224942    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:37.224955    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:37.224963    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:37.224969    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:37.224985    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:37.224993    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:37.224999    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:37.225007    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:37.225014    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:37.225026    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:37.225034    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:37.225041    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:39.227057    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 23
	I0917 10:58:39.227079    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:39.227119    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:39.227999    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:39.228046    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:39.228057    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:39.228077    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:39.228084    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:39.228090    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:39.228096    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:39.228102    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:39.228110    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:39.228117    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:39.228125    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:39.228136    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:39.228149    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:39.228161    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:39.228176    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:39.228188    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:39.228196    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:39.228205    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:39.228219    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:41.229292    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 24
	I0917 10:58:41.229307    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:41.229375    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:41.230284    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:41.230324    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:41.230334    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:41.230343    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:41.230350    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:41.230358    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:41.230363    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:41.230370    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:41.230376    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:41.230382    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:41.230388    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:41.230395    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:41.230403    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:41.230418    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:41.230428    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:41.230437    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:41.230443    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:41.230457    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:41.230469    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:43.232474    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 25
	I0917 10:58:43.232486    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:43.232656    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:43.233511    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:43.233557    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:43.233568    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:43.233582    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:43.233590    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:43.233597    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:43.233604    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:43.233612    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:43.233618    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:43.233626    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:43.233633    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:43.233640    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:43.233658    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:43.233670    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:43.233678    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:43.233690    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:43.233699    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:43.233706    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:43.233716    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:45.235706    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 26
	I0917 10:58:45.235718    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:45.235783    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:45.236646    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:45.236697    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:45.236709    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:45.236717    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:45.236724    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:45.236730    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:45.236737    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:45.236744    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:45.236749    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:45.236763    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:45.236773    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:45.236790    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:45.236798    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:45.236814    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:45.236822    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:45.236829    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:45.236837    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:45.236848    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:45.236856    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:47.237121    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 27
	I0917 10:58:47.237141    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:47.237212    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:47.238060    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:47.238085    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:47.238100    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:47.238113    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:47.238121    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:47.238128    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:47.238134    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:47.238141    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:47.238149    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:47.238163    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:47.238173    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:47.238180    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:47.238188    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:47.238203    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:47.238214    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:47.238224    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:47.238229    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:47.238238    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:47.238247    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:49.238323    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 28
	I0917 10:58:49.238751    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:49.238801    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:49.239339    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:49.239393    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:49.239402    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:49.239416    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:49.239424    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:49.239433    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:49.239440    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:49.239447    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:49.239455    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:49.239462    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:49.239468    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:49.239546    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:49.239571    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:49.239637    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:49.239662    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:49.239714    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:49.239894    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:49.239923    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:49.239935    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:51.240481    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Attempt 29
	I0917 10:58:51.240497    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:58:51.240587    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | hyperkit pid from json: 6325
	I0917 10:58:51.241458    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Searching for 82:b9:77:71:10:d5 in /var/db/dhcpd_leases ...
	I0917 10:58:51.241497    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:58:51.241505    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:58:51.241515    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:58:51.241527    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:58:51.241537    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:58:51.241548    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:58:51.241555    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:58:51.241561    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:58:51.241566    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:58:51.241573    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:58:51.241582    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:58:51.241594    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:58:51.241607    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:58:51.241625    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:58:51.241636    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:58:51.241652    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:58:51.241665    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:58:51.241676    6257 main.go:141] libmachine: (force-systemd-flag-812000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:58:53.242468    6257 client.go:171] duration metric: took 1m0.869600153s to LocalClient.Create
	I0917 10:58:55.244579    6257 start.go:128] duration metric: took 1m2.903279381s to createHost
	I0917 10:58:55.244595    6257 start.go:83] releasing machines lock for "force-systemd-flag-812000", held for 1m2.903419341s
	W0917 10:58:55.244660    6257 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-812000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:b9:77:71:10:d5
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-812000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:b9:77:71:10:d5
	I0917 10:58:55.287046    6257 out.go:201] 
	W0917 10:58:55.307882    6257 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:b9:77:71:10:d5
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:b9:77:71:10:d5
	W0917 10:58:55.307896    6257 out.go:270] * 
	* 
	W0917 10:58:55.308540    6257 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:58:55.370863    6257 out.go:201] 

                                                
                                                
** /stderr **
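The stderr above traces the hyperkit driver's IP-discovery loop: after launching the VM it polls /var/db/dhcpd_leases roughly every two seconds for an entry whose hardware address matches the VM's generated MAC (82:b9:77:71:10:d5 here), and gives up after about 30 attempts, consistent with the 1m0.869600153s LocalClient.Create duration metric. Below is a minimal illustrative sketch of that scan, not the driver's actual code, assuming bootpd's brace-delimited key=value lease format (name=..., ip_address=..., hw_address=1,<mac>):

// leasescan.go - illustrative sketch (not the driver's code) of the lookup
// the debug log above traces: scan /var/db/dhcpd_leases for the lease whose
// hw_address carries a given MAC and report its ip_address.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC performs one pass over the leases file. Assumed entry layout:
//
//	{
//		name=minikube
//		ip_address=192.169.0.19
//		hw_address=1,82:b9:77:71:10:d5
//		...
//	}
func findIPForMAC(path, mac string) (string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry begins
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
			return ip, true, nil // ip_address precedes hw_address in each entry
		}
	}
	return "", false, sc.Err()
}

func main() {
	const mac = "82:b9:77:71:10:d5" // the MAC this run was waiting on
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok, _ := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	fmt.Println("IP address never found in dhcp leases file")
}

In this run the leases file held 17 entries from earlier profiles (192.169.0.2 through .18) but never gained one for the new MAC, so every pass came up empty and the driver surfaced the GUEST_PROVISION error.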
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-812000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-812000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-812000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (175.93204ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-812000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-812000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
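For reference, the assertion behind docker_test.go:110 is simple: shell into the VM and read Docker's cgroup driver, presumably expecting "systemd" given the --force-systemd flag under test. A hedged sketch of that check driven through the built binary; the exec.Command usage is illustrative, as the real test goes through the suite's own Run helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs: query Docker's cgroup driver inside the VM.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-812000",
		"ssh", "docker info --format {{.CgroupDriver}}")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no VM IP this fails, as above, with DRV_CP_ENDPOINT (exit status 50).
		fmt.Println("ssh failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("cgroup driver = %q, want %q\n", got, "systemd")
	}
}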
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-17 10:58:55.665215 -0700 PDT m=+3837.560595048
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-812000 -n force-systemd-flag-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-812000 -n force-systemd-flag-812000: exit status 7 (81.037959ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:58:55.744310    6338 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 10:58:55.744334    6338 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-812000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-812000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-812000
E0917 10:58:58.524704    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:59:00.875496    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-812000: (5.252967568s)
--- FAIL: TestForceSystemdFlag (252.09s)
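One oddity worth decoding from the stderr above: the suggestion renders as "minikube delete <no value>" because Go's text/template prints <no value> when a placeholder's key is absent from the data it is executed with. A minimal reproduction of that rendering; the .profile_arg field name is hypothetical, chosen only for illustration:

package main

import (
	"os"
	"text/template"
)

func main() {
	// A template whose placeholder never receives a value.
	t := template.Must(template.New("tip").Parse("minikube delete {{.profile_arg}}\n"))
	// Executing against an empty map leaves the key unresolved; the default
	// missingkey option renders the invalid value as "<no value>".
	_ = t.Execute(os.Stdout, map[string]string{}) // prints: minikube delete <no value>
}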

                                                
                                    
TestForceSystemdEnv (233.96s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-504000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0917 10:53:58.541184    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:54:23.067814    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-504000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.369614649s)

                                                
                                                
-- stdout --
	* [force-systemd-env-504000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-504000" primary control-plane node in "force-systemd-env-504000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-504000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:51:58.265511    6200 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:51:58.265673    6200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:51:58.265679    6200 out.go:358] Setting ErrFile to fd 2...
	I0917 10:51:58.265683    6200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:51:58.265851    6200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:51:58.267312    6200 out.go:352] Setting JSON to false
	I0917 10:51:58.290033    6200 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4885,"bootTime":1726590633,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:51:58.290201    6200 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:51:58.357452    6200 out.go:177] * [force-systemd-env-504000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:51:58.399460    6200 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:51:58.399465    6200 notify.go:220] Checking for updates...
	I0917 10:51:58.443204    6200 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:51:58.464506    6200 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:51:58.485502    6200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:51:58.506254    6200 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:51:58.527440    6200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0917 10:51:58.548935    6200 config.go:182] Loaded profile config "offline-docker-248000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:51:58.549017    6200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:51:58.577511    6200 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 10:51:58.621488    6200 start.go:297] selected driver: hyperkit
	I0917 10:51:58.621500    6200 start.go:901] validating driver "hyperkit" against <nil>
	I0917 10:51:58.621509    6200 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:51:58.624308    6200 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:51:58.624444    6200 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:51:58.632749    6200 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:51:58.636488    6200 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:51:58.636519    6200 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:51:58.636562    6200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 10:51:58.636785    6200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 10:51:58.636814    6200 cni.go:84] Creating CNI manager for ""
	I0917 10:51:58.636854    6200 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:51:58.636862    6200 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 10:51:58.636923    6200 start.go:340] cluster config:
	{Name:force-systemd-env-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:51:58.637005    6200 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:51:58.684476    6200 out.go:177] * Starting "force-systemd-env-504000" primary control-plane node in "force-systemd-env-504000" cluster
	I0917 10:51:58.705313    6200 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:51:58.705337    6200 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:51:58.705353    6200 cache.go:56] Caching tarball of preloaded images
	I0917 10:51:58.705439    6200 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:51:58.705447    6200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:51:58.705519    6200 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/force-systemd-env-504000/config.json ...
	I0917 10:51:58.705536    6200 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/force-systemd-env-504000/config.json: {Name:mke706fbbc41cd2997b4dcf679ae1152bcf9b84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:51:58.705865    6200 start.go:360] acquireMachinesLock for force-systemd-env-504000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:52:37.278530    6200 start.go:364] duration metric: took 38.572455087s to acquireMachinesLock for "force-systemd-env-504000"
	I0917 10:52:37.278569    6200 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:52:37.278621    6200 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:52:37.300163    6200 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:52:37.300354    6200 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:52:37.300402    6200 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:52:37.309780    6200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53749
	I0917 10:52:37.310126    6200 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:52:37.310513    6200 main.go:141] libmachine: Using API Version  1
	I0917 10:52:37.310523    6200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:52:37.310755    6200 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:52:37.310856    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .GetMachineName
	I0917 10:52:37.310938    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .DriverName
	I0917 10:52:37.311043    6200 start.go:159] libmachine.API.Create for "force-systemd-env-504000" (driver="hyperkit")
	I0917 10:52:37.311071    6200 client.go:168] LocalClient.Create starting
	I0917 10:52:37.311102    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:52:37.311154    6200 main.go:141] libmachine: Decoding PEM data...
	I0917 10:52:37.311169    6200 main.go:141] libmachine: Parsing certificate...
	I0917 10:52:37.311234    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:52:37.311274    6200 main.go:141] libmachine: Decoding PEM data...
	I0917 10:52:37.311285    6200 main.go:141] libmachine: Parsing certificate...
	I0917 10:52:37.311299    6200 main.go:141] libmachine: Running pre-create checks...
	I0917 10:52:37.311307    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .PreCreateCheck
	I0917 10:52:37.311376    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:37.311540    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .GetConfigRaw
	I0917 10:52:37.343065    6200 main.go:141] libmachine: Creating machine...
	I0917 10:52:37.343074    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .Create
	I0917 10:52:37.343192    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:37.343336    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:52:37.343175    6217 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:52:37.343396    6200 main.go:141] libmachine: (force-systemd-env-504000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:52:37.561695    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:52:37.561589    6217 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/id_rsa...
	I0917 10:52:37.873771    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:52:37.873680    6217 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/force-systemd-env-504000.rawdisk...
	I0917 10:52:37.873784    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Writing magic tar header
	I0917 10:52:37.873806    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Writing SSH key tar header
	I0917 10:52:37.874370    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:52:37.874313    6217 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000 ...
	I0917 10:52:38.239360    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:38.239382    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/hyperkit.pid
	I0917 10:52:38.239427    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Using UUID 511e43b7-0672-4a59-9afd-8121a1a7976c
	I0917 10:52:38.264247    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Generated MAC c2:87:2f:ef:63:62
	I0917 10:52:38.264271    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-504000
	I0917 10:52:38.264304    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"511e43b7-0672-4a59-9afd-8121a1a7976c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0005961b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:52:38.264334    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"511e43b7-0672-4a59-9afd-8121a1a7976c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0005961b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:52:38.264377    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "511e43b7-0672-4a59-9afd-8121a1a7976c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/force-systemd-env-504000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-504000"}
	I0917 10:52:38.264425    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 511e43b7-0672-4a59-9afd-8121a1a7976c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/force-systemd-env-504000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-504000"
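
For anyone reproducing the launch by hand, the DEBUG lines above record the full hyperkit command line the driver assembles for this VM. The following is a minimal Go sketch that builds and starts an equivalent invocation; it is illustrative only: the binary path, UUID, and state directory are copied from the log, the serial-console and kexec arguments are omitted for brevity, and launching via os/exec directly is an assumption, not the driver's actual code path (which goes through the docker-machine-driver-hyperkit plugin).

package main

// Illustrative sketch: assemble and start a hyperkit invocation like the one
// logged above. NOT the minikube driver's real launch path; values are taken
// from the log purely for demonstration.
import (
	"log"
	"os/exec"
)

func main() {
	stateDir := "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000"
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file the driver later reads back
		"-c", "2", // 2 vCPUs, matching the test's configuration
		"-m", "2048M", // 2048 MB RAM
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // vmnet NIC; its MAC appears in the DHCP search below
		"-U", "511e43b7-0672-4a59-9afd-8121a1a7976c", // VM UUID from the log
		"-s", "2:0,virtio-blk," + stateDir + "/force-systemd-env-504000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		// -l (console) and -f (kexec kernel/initrd/cmdline) args omitted here;
		// see the CmdLine debug line above for the full set.
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting hyperkit: %v", err)
	}
	log.Printf("hyperkit pid: %d", cmd.Process.Pid)
}

The adjacent "Using UUID" and "Generated MAC" debug lines suggest the -U UUID is what determines the MAC (c2:87:2f:ef:63:62) that the driver subsequently polls for in the DHCP leases.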
	I0917 10:52:38.264441    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:52:38.267363    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 DEBUG: hyperkit: Pid is 6218
	I0917 10:52:38.267894    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 0
	I0917 10:52:38.267911    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:38.268039    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:38.269357    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:38.269445    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:38.269467    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:38.269483    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:38.269491    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:38.269510    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:38.269529    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:38.269555    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:38.269568    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:38.269576    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:38.269581    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:38.269658    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:38.269679    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:38.269688    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:38.269695    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:38.269708    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:38.269725    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:38.269734    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:38.269756    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
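
Attempt 0 above shows the loop the driver repeats roughly every two seconds for the remainder of this test: re-read /var/db/dhcpd_leases and search its entries for the VM's generated MAC, which never appears among the 17 stale minikube leases. Below is a hedged Go sketch of that polling pattern; the ip_address/hw_address field names are an assumption about the lease file's on-disk format, and findLeaseIP is a hypothetical helper, not the driver's real parser (which yields the {Name IPAddress HWAddress ID Lease} entries printed in the log).

package main

// Hypothetical sketch of the lease-polling pattern visible in the log:
// scan /var/db/dhcpd_leases for a MAC address, retrying every 2 seconds.
// The line-oriented field matching below is a simplification of whatever
// parsing the real driver performs.
import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findLeaseIP returns the IP of the lease entry whose hw_address contains
// mac. It relies on ip_address preceding hw_address within each entry.
func findLeaseIP(mac string) (string, bool) {
	data, err := os.ReadFile("/var/db/dhcpd_leases")
	if err != nil {
		return "", false
	}
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, true
		}
	}
	return "", false
}

func main() {
	mac := "c2:87:2f:ef:63:62" // MAC generated for this VM, from the log
	for attempt := 0; attempt < 10; attempt++ {
		if ip, ok := findLeaseIP(mac); ok {
			fmt.Printf("found %s at %s\n", mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing between attempts
	}
	fmt.Println("lease never appeared; the VM likely failed to boot or reach the network")
}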
	I0917 10:52:38.275413    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:52:38.418037    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:52:38.418969    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:52:38.418990    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:52:38.419043    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:52:38.419062    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:52:38.796697    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:52:38.796712    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:52:38.911824    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:52:38.911842    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:52:38.911854    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:52:38.911873    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:52:38.912729    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:52:38.912740    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:52:40.270875    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 1
	I0917 10:52:40.270889    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:40.270999    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:40.271878    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:40.271927    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:40.271957    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:40.271966    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:40.271976    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:40.271985    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:40.271994    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:40.272000    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:40.272006    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:40.272013    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:40.272029    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:40.272043    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:40.272058    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:40.272078    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:40.272093    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:40.272105    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:40.272113    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:40.272119    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:40.272130    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:42.273554    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 2
	I0917 10:52:42.273573    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:42.273617    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:42.274640    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:42.274694    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:42.274708    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:42.274730    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:42.274740    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:42.274746    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:42.274754    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:42.274777    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:42.274789    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:42.274805    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:42.274819    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:42.274826    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:42.274834    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:42.274841    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:42.274853    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:42.274860    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:42.274867    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:42.274874    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:42.274883    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:44.275992    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 3
	I0917 10:52:44.276005    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:44.276097    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:44.276986    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:44.277062    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:44.277072    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:44.277081    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:44.277090    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:44.277097    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:44.277103    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:44.277113    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:44.277121    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:44.277134    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:44.277141    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:44.277160    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:44.277172    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:44.277182    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:44.277189    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:44.277206    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:44.277213    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:44.277221    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:44.277231    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:44.288624    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:52:44.288771    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:52:44.288780    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:52:44.309384    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:52:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:52:46.277406    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 4
	I0917 10:52:46.277420    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:46.277509    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:46.278393    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:46.278445    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:46.278456    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:46.278479    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:46.278490    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:46.278496    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:46.278503    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:46.278510    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:46.278517    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:46.278523    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:46.278532    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:46.278539    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:46.278546    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:46.278559    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:46.278573    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:46.278579    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:46.278593    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:46.278603    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:46.278611    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:48.280630    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 5
	I0917 10:52:48.280645    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:48.280716    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:48.281569    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:48.281639    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:48.281649    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:48.281658    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:48.281664    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:48.281672    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:48.281679    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:48.281685    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:48.281691    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:48.281696    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:48.281703    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:48.281711    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:48.281728    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:48.281738    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:48.281746    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:48.281754    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:48.281761    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:48.281769    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:48.281784    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:50.283837    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 6
	I0917 10:52:50.283851    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:50.283917    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:50.284822    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:50.284874    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:50.284884    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:50.284903    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:50.284910    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:50.284918    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:50.284926    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:50.284933    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:50.284940    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:50.284947    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:50.284952    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:50.284958    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:50.284968    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:50.284977    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:50.284984    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:50.285003    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:50.285015    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:50.285023    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:50.285031    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:52.285193    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 7
	I0917 10:52:52.285209    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:52.285274    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:52.286235    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:52.286283    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:52.286296    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:52.286315    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:52.286321    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:52.286327    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:52.286335    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:52.286350    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:52.286361    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:52.286369    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:52.286377    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:52.286385    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:52.286400    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:52.286421    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:52.286435    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:52.286453    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:52.286469    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:52.286480    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:52.286489    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:52:54.287968    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 8
	I0917 10:52:54.287985    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:52:54.288047    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:52:54.288958    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:52:54.289018    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:52:54.289028    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:52:54.289035    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:52:54.289043    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:52:54.289052    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:52:54.289058    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:52:54.289065    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:52:54.289070    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:52:54.289076    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:52:54.289082    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:52:54.289114    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:52:54.289131    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:52:54.289143    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:52:54.289151    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:52:54.289156    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:52:54.289170    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:52:54.289181    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:52:54.289188    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
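	
	Note: the retry loop above shows the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds for a lease whose HWAddress matches the new VM's MAC (c2:87:2f:ef:63:62 here); the VM never registers a lease, so every pass reports the same 17 stale entries. The Go sketch below illustrates this style of lease-file poll. It is only a sketch: the entry regex, the helper names (findLeaseIP, leaseRe), the attempt cap, and the assumed dhcpd_leases field layout are invented for illustration and are not minikube's actual implementation.
	
	package main
	
	import (
		"fmt"
		"os"
		"regexp"
		"time"
	)
	
	// leaseRe pulls ip_address/hw_address pairs out of the plist-like
	// entries in /var/db/dhcpd_leases (field layout assumed from the log).
	var leaseRe = regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,(\S+)`)
	
	// findLeaseIP rescans the lease file every two seconds, up to
	// maxAttempts, and returns the IP bound to mac once one appears.
	func findLeaseIP(path, mac string, maxAttempts int) (string, error) {
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			data, err := os.ReadFile(path)
			if err != nil {
				return "", err
			}
			for _, m := range leaseRe.FindAllStringSubmatch(string(data), -1) {
				if m[2] == mac {
					return m[1], nil // lease found: m[1] is the IP
				}
			}
			time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
		}
		return "", fmt.Errorf("no lease for %s after %d attempts", mac, maxAttempts)
	}
	
	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "c2:87:2f:ef:63:62", 60)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("VM IP:", ip)
	}
	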
	... attempts 9 through 22 (10:52:56 through 10:53:22) repeated the same scan at ~2-second intervals: each pass re-read hyperkit pid 6218 from json, found the same 17 entries in /var/db/dhcpd_leases (192.169.0.18 down to 192.169.0.2, leases unchanged), and found no entry matching c2:87:2f:ef:63:62 ...
	I0917 10:53:24.326473    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 23
	I0917 10:53:24.326491    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:24.326532    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:24.327697    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:24.327732    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:24.327751    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:24.327760    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:24.327765    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:24.327776    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:24.327788    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:24.327803    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:24.327814    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:24.327831    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:24.327840    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:24.327847    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:24.327852    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:24.327859    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:24.327867    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:24.327875    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:24.327894    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:24.327904    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:24.327914    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:26.329584    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 24
	I0917 10:53:26.329595    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:26.329674    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:26.330582    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:26.330615    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:26.330623    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:26.330633    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:26.330645    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:26.330658    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:26.330665    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:26.330671    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:26.330680    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:26.330686    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:26.330693    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:26.330707    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:26.330721    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:26.330731    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:26.330739    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:26.330746    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:26.330754    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:26.330760    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:26.330766    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:28.332274    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 25
	I0917 10:53:28.332286    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:28.332360    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:28.333238    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:28.333284    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:28.333292    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:28.333303    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:28.333309    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:28.333322    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:28.333338    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:28.333346    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:28.333353    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:28.333360    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:28.333369    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:28.333379    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:28.333387    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:28.333393    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:28.333399    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:28.333405    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:28.333412    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:28.333426    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:28.333433    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:30.335478    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 26
	I0917 10:53:30.335489    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:30.335570    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:30.336588    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:30.336611    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:30.336624    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:30.336631    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:30.336646    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:30.336655    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:30.336661    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:30.336668    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:30.336676    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:30.336683    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:30.336690    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:30.336703    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:30.336714    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:30.336721    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:30.336728    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:30.336735    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:30.336742    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:30.336749    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:30.336755    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:32.338440    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 27
	I0917 10:53:32.338450    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:32.338484    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:32.339400    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:32.339429    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:32.339446    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:32.339458    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:32.339464    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:32.339473    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:32.339481    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:32.339491    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:32.339499    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:32.339506    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:32.339514    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:32.339520    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:32.339526    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:32.339537    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:32.339552    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:32.339560    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:32.339567    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:32.339583    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:32.339599    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:34.339632    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 28
	I0917 10:53:34.339643    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:34.339705    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:34.340646    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:34.340654    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:34.340662    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:34.340668    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:34.340678    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:34.340683    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:34.340695    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:34.340701    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:34.340717    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:34.340729    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:34.340745    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:34.340754    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:34.340763    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:34.340771    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:34.340779    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:34.340787    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:34.340793    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:34.340801    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:34.340809    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:36.342070    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 29
	I0917 10:53:36.342084    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:36.342174    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:36.343024    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for c2:87:2f:ef:63:62 in /var/db/dhcpd_leases ...
	I0917 10:53:36.343097    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:53:36.343107    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:53:36.343116    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:53:36.343121    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:53:36.343148    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:53:36.343166    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:53:36.343175    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:53:36.343183    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:53:36.343189    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:53:36.343196    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:53:36.343204    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:53:36.343219    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:53:36.343230    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:53:36.343238    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:53:36.343249    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:53:36.343256    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:53:36.343264    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:53:36.343295    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:53:38.344329    6200 client.go:171] duration metric: took 1m1.032944476s to LocalClient.Create
	I0917 10:53:40.344757    6200 start.go:128] duration metric: took 1m3.065797186s to createHost
	I0917 10:53:40.344789    6200 start.go:83] releasing machines lock for "force-systemd-env-504000", held for 1m3.065935049s
	W0917 10:53:40.344818    6200 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:87:2f:ef:63:62
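The failure above caps a fixed polling loop: roughly every two seconds the driver re-reads macOS bootpd's /var/db/dhcpd_leases and looks for the MAC address it generated for the VM, giving up after about a minute of attempts. A minimal sketch of that scan-and-retry, assuming bootpd's name=/ip_address=/hw_address= on-disk layout (the log only shows the already-parsed entries) and using illustrative function names rather than the driver's real API:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the bootpd lease file for a hardware address and
// returns the matching IP. The fields mirror the "dhcp entry:
// {Name:... IPAddress:... HWAddress:...}" lines printed above.
func findIPForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// bootpd prefixes the MAC with a type byte: "1,c2:87:..."
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
		case line == "}": // end of one lease entry
			if hw == mac {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	mac := "c2:87:2f:ef:63:62" // the MAC generated for this VM
	for attempt := 0; attempt < 30; attempt++ { // ~1 minute, matching the create timeout above
		if ip, err := findIPForMAC("/var/db/dhcpd_leases", mac); err == nil {
			fmt.Println("found", ip)
			return
		}
		time.Sleep(2 * time.Second) // one "Attempt N" every two seconds
	}
	fmt.Println("IP address never found in dhcp leases file")
}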
	I0917 10:53:40.345194    6200 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:53:40.345220    6200 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:53:40.354249    6200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53751
	I0917 10:53:40.354754    6200 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:53:40.355233    6200 main.go:141] libmachine: Using API Version  1
	I0917 10:53:40.355262    6200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:53:40.355576    6200 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:53:40.356027    6200 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:53:40.356049    6200 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:53:40.364697    6200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53753
	I0917 10:53:40.365038    6200 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:53:40.365376    6200 main.go:141] libmachine: Using API Version  1
	I0917 10:53:40.365391    6200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:53:40.365613    6200 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:53:40.365756    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .GetState
	I0917 10:53:40.365850    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:40.365919    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:40.367042    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .DriverName
	I0917 10:53:40.388168    6200 out.go:177] * Deleting "force-systemd-env-504000" in hyperkit ...
	I0917 10:53:40.430251    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .Remove
	I0917 10:53:40.430373    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:40.430383    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:40.430450    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:40.431494    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:40.431564    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | waiting for graceful shutdown
	I0917 10:53:41.433384    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:41.433466    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:41.434558    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | waiting for graceful shutdown
	I0917 10:53:42.435001    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:42.435149    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:42.436868    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | waiting for graceful shutdown
	I0917 10:53:43.438067    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:43.438162    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:43.438798    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | waiting for graceful shutdown
	I0917 10:53:44.440950    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:44.440968    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:44.441623    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | waiting for graceful shutdown
	I0917 10:53:45.442119    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:45.442478    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6218
	I0917 10:53:45.443158    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | sending sigkill
	I0917 10:53:45.443168    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:53:45.453022    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:53:45 WARN : hyperkit: failed to read stdout: EOF
	I0917 10:53:45.453040    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:53:45 WARN : hyperkit: failed to read stderr: EOF
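The Remove call above escalates in a visible pattern: check the hyperkit pid once a second while "waiting for graceful shutdown", then fall back to "sending sigkill" after several attempts. A hedged sketch of that pattern; the grace period, the initial SIGTERM, and all names are assumptions, since the log only shows the polling and the final SIGKILL:

package driverutil // illustrative package name

import (
	"syscall"
	"time"
)

// stopVM asks the hyperkit process to exit, waits up to grace for it to
// disappear, then hard-kills it -- mirroring the "waiting for graceful
// shutdown" / "sending sigkill" lines in the log.
func stopVM(pid int, grace time.Duration) {
	_ = syscall.Kill(pid, syscall.SIGTERM) // assumed initial stop signal
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 delivers nothing; it only reports whether pid still exists.
		if err := syscall.Kill(pid, 0); err != nil {
			return // process exited on its own
		}
		time.Sleep(time.Second)
	}
	_ = syscall.Kill(pid, syscall.SIGKILL)
}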
	W0917 10:53:45.475449    6200 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:87:2f:ef:63:62
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:87:2f:ef:63:62
	I0917 10:53:45.475464    6200 start.go:729] Will try again in 5 seconds ...
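After the kill, start.go applies one outer retry: surface the StartHost error as a warning, wait five seconds, re-acquire the machines lock, and provision from scratch. The control flow, sketched with illustrative names:

package driverutil

import "time"

// startWithRetry retries host creation a bounded number of times,
// cleaning up the half-built VM between attempts -- the
// "StartHost failed, but will try again" path above.
func startWithRetry(create func() error, cleanup func(), delay time.Duration, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = create(); err == nil {
			return nil
		}
		cleanup()         // "* Deleting ... in hyperkit ..."
		time.Sleep(delay) // "Will try again in 5 seconds ..."
	}
	return err
}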
	I0917 10:53:50.477538    6200 start.go:360] acquireMachinesLock for force-systemd-env-504000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:54:43.244651    6200 start.go:364] duration metric: took 52.779232691s to acquireMachinesLock for "force-systemd-env-504000"
	I0917 10:54:43.244686    6200 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:54:43.244744    6200 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 10:54:43.286996    6200 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 10:54:43.287092    6200 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:54:43.287118    6200 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:54:43.295861    6200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53757
	I0917 10:54:43.296243    6200 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:54:43.296600    6200 main.go:141] libmachine: Using API Version  1
	I0917 10:54:43.296616    6200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:54:43.296847    6200 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:54:43.296971    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .GetMachineName
	I0917 10:54:43.297052    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .DriverName
	I0917 10:54:43.297174    6200 start.go:159] libmachine.API.Create for "force-systemd-env-504000" (driver="hyperkit")
	I0917 10:54:43.297190    6200 client.go:168] LocalClient.Create starting
	I0917 10:54:43.297216    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem
	I0917 10:54:43.297266    6200 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:43.297277    6200 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:43.297317    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem
	I0917 10:54:43.297354    6200 main.go:141] libmachine: Decoding PEM data...
	I0917 10:54:43.297366    6200 main.go:141] libmachine: Parsing certificate...
	I0917 10:54:43.297377    6200 main.go:141] libmachine: Running pre-create checks...
	I0917 10:54:43.297381    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .PreCreateCheck
	I0917 10:54:43.297469    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:43.297503    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .GetConfigRaw
	I0917 10:54:43.329007    6200 main.go:141] libmachine: Creating machine...
	I0917 10:54:43.329016    6200 main.go:141] libmachine: (force-systemd-env-504000) Calling .Create
	I0917 10:54:43.329129    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:43.329266    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:54:43.329134    6246 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:54:43.329355    6200 main.go:141] libmachine: (force-systemd-env-504000) Downloading /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 10:54:43.725167    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:54:43.725111    6246 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/id_rsa...
	I0917 10:54:43.812026    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:54:43.811977    6246 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/force-systemd-env-504000.rawdisk...
	I0917 10:54:43.812046    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Writing magic tar header
	I0917 10:54:43.812056    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Writing SSH key tar header
	I0917 10:54:43.812358    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | I0917 10:54:43.812327    6246 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000 ...
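The "Writing magic tar header" / "Writing SSH key tar header" lines suggest how the raw disk is seeded: the freshly created SSH key is packed as a small tar stream at the front of the .rawdisk file, for the guest to pick up on first boot. The exact on-disk magic is not shown in the log, so the sketch below is an assumption about the mechanism, not the driver's actual code:

package main

import (
	"archive/tar"
	"os"
)

// writeRawDisk creates the raw disk file, writes a tar stream carrying the
// SSH key at offset 0, then grows the file to its full provisioned size
// (sparse on APFS/HFS+, so 20000MB costs little until the guest uses it).
func writeRawDisk(path string, sizeBytes int64, keyPEM []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: "id_rsa", Mode: 0600, Size: int64(len(keyPEM))}
	if err := tw.WriteHeader(hdr); err != nil { // "Writing SSH key tar header"
		return err
	}
	if _, err := tw.Write(keyPEM); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	return f.Truncate(sizeBytes)
}

func main() {
	key, _ := os.ReadFile("id_rsa") // assumes the key from the earlier step exists
	if err := writeRawDisk("force-systemd-env-504000.rawdisk", 20000<<20, key); err != nil {
		panic(err)
	}
}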
	I0917 10:54:44.274638    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:44.274666    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/hyperkit.pid
	I0917 10:54:44.274723    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Using UUID cd07cfc8-d443-4a96-9587-c0774bd6992a
	I0917 10:54:44.300267    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Generated MAC 8a:84:45:bc:50:a1
	I0917 10:54:44.300283    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-504000
	I0917 10:54:44.300316    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cd07cfc8-d443-4a96-9587-c0774bd6992a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000a8330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:54:44.300343    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cd07cfc8-d443-4a96-9587-c0774bd6992a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000a8330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:54:44.300401    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "cd07cfc8-d443-4a96-9587-c0774bd6992a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/force-systemd-env-504000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-504000"}
	I0917 10:54:44.300451    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U cd07cfc8-d443-4a96-9587-c0774bd6992a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/force-systemd-env-504000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-504000"
	I0917 10:54:44.300464    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
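Every flag in the Arguments vector above is also echoed verbatim in the CmdLine line, which makes the layout easy to restate. A sketch that assembles the same argv from its parts; buildArgs is an illustrative name (not the driver's API), and the values in main are shortened stand-ins for the logged paths:

package main

import "fmt"

func buildArgs(pidFile, uuid, rawDisk, iso, tty, ring, kernel, initrd, cmdline string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", pidFile, // hyperkit writes its pid here
		"-c", fmt.Sprintf("%d", cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge", // PCI host bridge
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // the NIC that should request the DHCP lease
		"-U", uuid,
		"-s", "2:0,virtio-blk," + rawDisk,
		"-s", "3,ahci-cd," + iso,
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + tty + ",log=" + ring,
		"-f", "kexec," + kernel + "," + initrd + "," + cmdline, // boot the kernel directly, no bootrom
	}
}

func main() {
	args := buildArgs("hyperkit.pid", "cd07cfc8-d443-4a96-9587-c0774bd6992a",
		"force-systemd-env-504000.rawdisk", "boot2docker.iso", "tty", "console-ring",
		"bzimage", "initrd", "earlyprintk=serial loglevel=3 console=ttyS0", 2, 2048)
	fmt.Println(args)
}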
	I0917 10:54:44.303584    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 DEBUG: hyperkit: Pid is 6256
	I0917 10:54:44.303984    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 0
	I0917 10:54:44.303996    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:44.304082    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:44.305213    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:44.305282    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:44.305293    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:44.305320    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:44.305335    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:44.305344    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:44.305349    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:44.305356    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:44.305363    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:44.305375    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:44.305386    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:44.305393    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:44.305400    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:44.305408    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:44.305416    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:44.305423    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:44.305430    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:44.305484    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:44.305527    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
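One decodable detail in these lease tables: the Lease field looks like a hex-encoded Unix expiry time (an inference from the values, which land roughly a day after this run, not something the log states). For example:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Lease:0x66eb12c0 from the 192.169.0.18 entry above.
	sec, err := strconv.ParseInt("66eb12c0", 16, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(time.Unix(sec, 0).UTC()) // 2024-09-18 17:49:52 +0000 UTC
}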
	I0917 10:54:44.311479    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:54:44.319652    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/force-systemd-env-504000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:54:44.320540    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:54:44.320554    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:54:44.320561    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:54:44.320569    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:54:44.697503    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:54:44.697518    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:54:44.812113    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:54:44.812130    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:54:44.812140    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:54:44.812163    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:54:44.813057    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:54:44.813069    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:54:46.306113    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 1
	I0917 10:54:46.306127    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:46.306225    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:46.307164    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:46.307187    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:46.307197    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:46.307209    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:46.307220    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:46.307246    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:46.307262    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:46.307272    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:46.307280    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:46.307298    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:46.307316    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:46.307327    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:46.307339    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:46.307350    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:46.307357    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:46.307365    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:46.307376    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:46.307390    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:46.307398    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
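
Each "Attempt" block above is the driver re-reading macOS's bootpd lease database and comparing every entry's hardware address against the new VM's MAC (8a:84:45:bc:50:a1); the seventeen stale "minikube" entries it keeps finding never match. As a rough illustration of that scan (a minimal sketch with hypothetical names, not the driver's actual source; the field layout follows the bootpd lease-file format of name=/ip_address=/hw_address=1,<mac> blocks), in Go:

// A minimal sketch (hypothetical names, not the driver's source) of the
// lease scan logged above: read macOS's /var/db/dhcpd_leases and return
// the IP bound to a given MAC. Entry fields are assumed to follow the
// bootpd layout: { name=... ip_address=... hw_address=1,<mac> lease=... }
package main

import (
	"bufio"
	"os"
	"strings"
)

// findIPByMAC returns the ip_address of the lease entry whose hw_address
// matches mac (case-insensitively), or "" if no entry matches.
func findIPByMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string // ip_address of the entry currently being read
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry begins
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,8a:84:45:bc:50:a1"; strip the "1," type prefix
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}
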
	I0917 10:54:48.309152    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 2
	I0917 10:54:48.309164    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:48.309238    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:48.310145    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:48.310196    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:48.310206    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:48.310216    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:48.310226    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:48.310236    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:48.310244    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:48.310251    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:48.310257    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:48.310265    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:48.310272    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:48.310279    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:48.310285    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:48.310300    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:48.310312    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:48.310320    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:48.310328    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:48.310335    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:48.310341    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
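
The roughly two-second spacing between "Attempt N" lines suggests a fixed-interval retry loop around that scan. A hypothetical sketch of the outer loop, in the same file as findIPByMAC above (the interval and overall shape are inferred from the log's timestamps, not taken from the driver's source; it additionally needs the "fmt" and "time" imports):

// waitForIP polls the lease file until the VM's MAC shows up or the
// attempt budget runs out. Lives in the same sketch file as findIPByMAC;
// add "fmt" and "time" to the import block.
func waitForIP(mac string, maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		fmt.Printf("Attempt %d: searching for %s in /var/db/dhcpd_leases ...\n", attempt, mac)
		ip, err := findIPByMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(2 * time.Second) // matches the cadence of the logged attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, maxAttempts)
}

func main() {
	// The MAC below is the one the log above keeps searching for.
	if ip, err := waitForIP("8a:84:45:bc:50:a1", 60); err != nil {
		fmt.Fprintln(os.Stderr, err)
	} else {
		fmt.Println("VM got IP", ip)
	}
}
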
	I0917 10:54:50.193508    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:54:50.193625    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:54:50.193634    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:54:50.213219    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | 2024/09/17 10:54:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:54:50.311608    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 3
	I0917 10:54:50.311635    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:50.311801    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:50.313438    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:50.313526    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:50.313545    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:50.313578    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:50.313597    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:50.313634    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:50.313652    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:50.313662    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:50.313673    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:50.313681    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:50.313689    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:50.313701    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:50.313714    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:50.313734    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:50.313751    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:50.313767    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:50.313781    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:50.313791    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:50.313799    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:52.314828    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 4
	I0917 10:54:52.314846    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:52.314923    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:52.315817    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:52.315872    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:52.315889    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:52.315898    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:52.315905    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:52.315932    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:52.315955    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:52.315966    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:52.315973    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:52.315979    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:52.315985    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:52.315991    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:52.315998    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:52.316018    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:52.316033    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:52.316045    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:52.316054    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:52.316062    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:52.316073    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:54.316455    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 5
	I0917 10:54:54.316471    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:54.316522    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:54.317434    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:54.317495    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:54.317507    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:54.317516    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:54.317529    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:54.317539    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:54.317547    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:54.317560    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:54.317569    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:54.317575    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:54.317581    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:54.317599    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:54.317611    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:54.317628    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:54.317641    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:54.317652    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:54.317658    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:54.317664    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:54.317672    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:56.317746    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 6
	I0917 10:54:56.317759    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:56.317791    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:56.318665    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:56.318727    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:56.318739    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:56.318747    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:56.318753    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:56.318760    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:56.318767    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:56.318775    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:56.318785    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:56.318792    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:56.318800    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:56.318807    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:56.318815    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:56.318827    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:56.318843    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:56.318855    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:56.318863    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:56.318872    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:56.318884    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:54:58.320852    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 7
	I0917 10:54:58.320865    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:54:58.320926    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:54:58.321827    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:54:58.321900    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:54:58.321909    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:54:58.321917    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:54:58.321922    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:54:58.321931    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:54:58.321936    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:54:58.321952    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:54:58.321962    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:54:58.321969    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:54:58.321978    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:54:58.321993    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:54:58.322004    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:54:58.322012    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:54:58.322020    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:54:58.322027    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:54:58.322034    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:54:58.322041    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:54:58.322049    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:00.322524    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 8
	I0917 10:55:00.322539    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:00.322597    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:00.323472    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:00.323512    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:00.323520    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:00.323543    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:00.323558    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:00.323565    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:00.323573    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:00.323579    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:00.323585    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:00.323594    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:00.323603    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:00.323612    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:00.323619    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:00.323627    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:00.323633    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:00.323640    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:00.323652    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:00.323662    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:00.323681    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:02.325636    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 9
	I0917 10:55:02.325649    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:02.325713    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:02.326636    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:02.326674    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:02.326684    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:02.326691    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:02.326700    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:02.326708    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:02.326715    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:02.326722    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:02.326730    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:02.326738    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:02.326746    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:02.326753    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:02.326760    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:02.326776    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:02.326790    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:02.326797    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:02.326803    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:02.326811    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:02.326828    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:04.328814    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 10
	I0917 10:55:04.328835    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:04.328900    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:04.329791    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:04.329843    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:04.329853    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:04.329861    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:04.329866    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:04.329884    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:04.329893    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:04.329910    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:04.329921    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:04.329935    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:04.329947    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:04.329966    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:04.329973    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:04.329982    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:04.329995    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:04.330002    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:04.330010    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:04.330017    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:04.330025    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:06.330821    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 11
	I0917 10:55:06.330837    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:06.330902    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:06.331834    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:06.331879    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:06.331887    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:06.331897    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:06.331903    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:06.331909    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:06.331922    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:06.331928    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:06.331943    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:06.331958    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:06.331971    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:06.331987    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:06.331993    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:06.332000    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:06.332007    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:06.332014    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:06.332022    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:06.332028    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:06.332035    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:08.332119    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 12
	I0917 10:55:08.332133    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:08.332237    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:08.333108    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:08.333173    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:08.333183    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:08.333192    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:08.333205    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:08.333234    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:08.333247    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:08.333254    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:08.333262    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:08.333275    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:08.333283    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:08.333291    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:08.333298    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:08.333320    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:08.333331    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:08.333345    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:08.333353    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:08.333363    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:08.333371    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:10.333732    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 13
	I0917 10:55:10.333747    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:10.333804    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:10.334673    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:10.334718    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:10.334727    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:10.334736    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:10.334742    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:10.334749    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:10.334755    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:10.334762    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:10.334780    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:10.334788    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:10.334794    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:10.334803    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:10.334811    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:10.334823    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:10.334830    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:10.334835    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:10.334847    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:10.334867    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:10.334877    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:12.336659    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 14
	I0917 10:55:12.336671    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:12.336731    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:12.337551    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:12.337592    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:12.337601    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:12.337611    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:12.337618    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:12.337638    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:12.337648    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:12.337656    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:12.337662    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:12.337676    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:12.337689    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:12.337698    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:12.337706    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:12.337713    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:12.337721    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:12.337727    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:12.337739    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:12.337746    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:12.337753    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:14.339796    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 15
	I0917 10:55:14.339808    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:14.339874    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:14.340739    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:14.340780    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:14.340788    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:14.340805    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:14.340815    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:14.340823    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:14.340829    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:14.340849    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:14.340861    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:14.340877    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:14.340888    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:14.340898    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:14.340906    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:14.340913    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:14.340924    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:14.340931    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:14.340944    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:14.340954    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:14.340962    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:16.341191    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 16
	I0917 10:55:16.341204    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:16.341250    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:16.342209    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:16.342259    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:16.342277    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:16.342302    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:16.342329    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:16.342335    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:16.342368    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:16.342374    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:16.342386    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:16.342403    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:16.342414    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:16.342431    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:16.342444    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:16.342454    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:16.342464    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:16.342473    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:16.342479    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:16.342485    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:16.342493    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:18.343472    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 17
	I0917 10:55:18.343484    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:18.343529    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:18.344443    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:18.344494    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:18.344501    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:18.344509    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:18.344514    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:18.344536    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:18.344543    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:18.344549    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:18.344557    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:18.344577    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:18.344590    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:18.344598    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:18.344604    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:18.344611    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:18.344618    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:18.344637    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:18.344648    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:18.344657    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:18.344662    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:20.346667    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 18
	I0917 10:55:20.346685    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:20.346723    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:20.347584    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:20.347637    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:20.347652    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:20.347665    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:20.347671    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:20.347680    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:20.347686    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:20.347693    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:20.347700    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:20.347707    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:20.347714    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:20.347720    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:20.347726    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:20.347741    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:20.347753    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:20.347761    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:20.347769    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:20.347778    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:20.347785    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:22.349580    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 19
	I0917 10:55:22.349591    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:22.349667    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:22.350549    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:22.350605    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:22.350619    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:22.350639    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:22.350651    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:22.350658    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:22.350665    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:22.350671    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:22.350679    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:22.350709    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:22.350718    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:22.350725    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:22.350737    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:22.350745    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:22.350752    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:22.350762    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:22.350769    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:22.350791    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:22.350806    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:24.351345    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 20
	I0917 10:55:24.351363    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:24.351438    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:24.352450    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:24.352505    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:24.352518    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:24.352530    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:24.352536    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:24.352543    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:24.352551    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:24.352558    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:24.352565    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:24.352572    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:24.352580    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:24.352595    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:24.352607    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:24.352616    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:24.352624    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:24.352633    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:24.352645    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:24.352652    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:24.352660    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:26.354582    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 21
	I0917 10:55:26.354593    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:26.354710    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:26.355627    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:26.355665    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:26.355673    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:26.355683    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:26.355689    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:26.355696    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:26.355712    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:26.355721    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:26.355727    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:26.355736    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:26.355744    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:26.355752    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:26.355759    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:26.355766    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:26.355773    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:26.355780    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:26.355795    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:26.355806    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:26.355824    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:28.357838    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 22
	I0917 10:55:28.357852    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:28.357882    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:28.358764    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:28.358785    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:28.358802    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:28.358814    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:28.358821    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:28.358827    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:28.358834    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:28.358846    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:28.358860    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:28.358869    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:28.358877    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:28.358885    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:28.358892    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:28.358908    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:28.358922    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:28.358932    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:28.358940    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:28.358948    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:28.358956    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:30.360839    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 23
	I0917 10:55:30.360854    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:30.360902    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:30.361864    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:30.361906    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:30.361918    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:30.361925    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:30.361933    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:30.361941    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:30.361948    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:30.361968    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:30.361981    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:30.361989    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:30.361997    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:30.362012    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:30.362023    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:30.362032    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:30.362042    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:30.362067    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:30.362080    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:30.362087    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:30.362093    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:32.364018    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 24
	I0917 10:55:32.364030    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:32.364084    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:32.364953    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:32.364996    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:32.365006    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:32.365014    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:32.365024    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:32.365034    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:32.365042    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:32.365049    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:32.365056    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:32.365072    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:32.365085    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:32.365094    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:32.365102    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:32.365108    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:32.365119    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:32.365129    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:32.365155    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:32.365168    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:32.365178    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:34.365719    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 25
	I0917 10:55:34.365743    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:34.365831    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:34.366735    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:34.366794    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:34.366806    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:34.366827    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:34.366838    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:34.366853    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:34.366861    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:34.366868    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:34.366874    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:34.366880    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:34.366886    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:34.366892    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:34.366900    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:34.366907    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:34.366914    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:34.366920    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:34.366927    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:34.366941    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:34.366952    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:36.368987    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 26
	I0917 10:55:36.369000    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:36.369048    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:36.369931    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:36.369979    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:36.369989    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:36.369998    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:36.370005    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:36.370014    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:36.370020    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:36.370036    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:36.370049    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:36.370057    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:36.370065    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:36.370075    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:36.370082    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:36.370088    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:36.370095    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:36.370103    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:36.370108    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:36.370119    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:36.370126    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:38.370372    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 27
	I0917 10:55:38.370386    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:38.370441    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:38.371311    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:38.371376    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:38.371386    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:38.371393    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:38.371398    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:38.371424    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:38.371437    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:38.371452    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:38.371465    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:38.371481    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:38.371490    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:38.371497    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:38.371507    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:38.371516    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:38.371524    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:38.371539    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:38.371551    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:38.371566    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:38.371574    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:40.372421    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 28
	I0917 10:55:40.372433    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:40.372484    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:40.373410    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:40.373461    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:40.373472    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:40.373483    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:40.373492    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:40.373501    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:40.373517    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:40.373525    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:40.373536    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:40.373543    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:40.373551    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:40.373558    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:40.373565    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:40.373583    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:40.373594    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:40.373602    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:40.373612    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:40.373618    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:40.373626    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:42.374936    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Attempt 29
	I0917 10:55:42.374948    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:55:42.375019    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | hyperkit pid from json: 6256
	I0917 10:55:42.375873    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Searching for 8a:84:45:bc:50:a1 in /var/db/dhcpd_leases ...
	I0917 10:55:42.375934    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0917 10:55:42.375943    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:5:7b:7:a4:ad ID:1,b6:5:7b:7:a4:ad Lease:0x66eb12c0}
	I0917 10:55:42.375952    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb11dd}
	I0917 10:55:42.375957    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:52:cb:32:d:58:d0 ID:1,52:cb:32:d:58:d0 Lease:0x66eb1146}
	I0917 10:55:42.375964    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a6:73:4e:b6:30:47 ID:1,a6:73:4e:b6:30:47 Lease:0x66e9bf46}
	I0917 10:55:42.375972    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:76:f5:91:ff:23:de ID:1,76:f5:91:ff:23:de Lease:0x66eb111c}
	I0917 10:55:42.375977    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ba:6f:80:ed:37:74 ID:1,ba:6f:80:ed:37:74 Lease:0x66eb10e2}
	I0917 10:55:42.375993    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:36:b9:e1:f6:57:51 ID:1,36:b9:e1:f6:57:51 Lease:0x66eb0ead}
	I0917 10:55:42.376001    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:f6:c4:b0:de:48:ac ID:1,f6:c4:b0:de:48:ac Lease:0x66eb0e85}
	I0917 10:55:42.376008    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:fa:65:63:ee:c5 ID:1,26:fa:65:63:ee:c5 Lease:0x66eb0e25}
	I0917 10:55:42.376016    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f2:cf:ac:ee:44:14 ID:1,f2:cf:ac:ee:44:14 Lease:0x66eb0df7}
	I0917 10:55:42.376024    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:55:42.376031    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:55:42.376045    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0d7e}
	I0917 10:55:42.376059    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66e9bc64}
	I0917 10:55:42.376076    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:6a:24:c5:db:13:2c ID:1,6a:24:c5:db:13:2c Lease:0x66eb0a38}
	I0917 10:55:42.376088    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:22:d3:a6:a1:85:c8 ID:1,22:d3:a6:a1:85:c8 Lease:0x66e9b815}
	I0917 10:55:42.376107    6200 main.go:141] libmachine: (force-systemd-env-504000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:e2:d3:e5:a7:fc:ca ID:1,e2:d3:e5:a7:fc:ca Lease:0x66eb0613}
	I0917 10:55:44.377327    6200 client.go:171] duration metric: took 1m1.082043762s to LocalClient.Create
	I0917 10:55:46.377740    6200 start.go:128] duration metric: took 1m3.134916886s to createHost
	I0917 10:55:46.377753    6200 start.go:83] releasing machines lock for "force-systemd-env-504000", held for 1m3.135019597s
	W0917 10:55:46.377820    6200 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-504000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:84:45:bc:50:a1
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-504000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:84:45:bc:50:a1
	I0917 10:55:46.441197    6200 out.go:201] 
	W0917 10:55:46.462123    6200 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:84:45:bc:50:a1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:84:45:bc:50:a1
	W0917 10:55:46.462137    6200 out.go:270] * 
	* 
	W0917 10:55:46.462806    6200 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:55:46.525072    6200 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-504000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
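
Context for the failure above: the hyperkit driver boots the VM with a freshly generated MAC address (here 8a:84:45:bc:50:a1) and then polls the host's /var/db/dhcpd_leases roughly every two seconds, as the numbered "Attempt" lines show, until an entry with that hardware address appears; when the roughly one-minute retry budget is exhausted (the log reports 1m1s for LocalClient.Create) it gives up with "IP address never found in dhcp leases file". A minimal sketch of that lookup, assuming the lease format implied by the parsed entries in the log (hw_address stored as "1,<mac>", with ip_address appearing earlier in each lease block); lookupIPByMAC is an illustrative name, not the driver's actual function:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// lookupIPByMAC scans a macOS DHCP lease file such as /var/db/dhcpd_leases
	// for an entry whose hardware address matches mac and returns its IP.
	// Note: macOS writes MACs without zero-padding (e.g. b6:5:7b:7:a4:ad), so a
	// real implementation would normalize both sides before comparing.
	func lookupIPByMAC(leaseFile, mac string) (string, error) {
		data, err := os.ReadFile(leaseFile)
		if err != nil {
			return "", err
		}
		var ip string // ip_address precedes hw_address within each lease block
		for _, raw := range strings.Split(string(data), "\n") {
			line := strings.TrimSpace(raw)
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 && hw[i+1:] == mac {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := lookupIPByMAC("/var/db/dhcpd_leases", "8a:84:45:bc:50:a1")
		fmt.Println(ip, err)
	}
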
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-504000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-504000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (182.137007ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-504000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-504000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
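
The exit status 50 here follows directly from the earlier provisioning failure: the profile never got an IP, so minikube cannot resolve the control-plane endpoint (DRV_CP_ENDPOINT) and the ssh command never reaches the guest. On a healthy cluster the check reduces to a Go-template query against the guest's Docker daemon; a sketch of the same probe, with cgroupDriver as a hypothetical helper name, run against whatever daemon the local docker CLI points at:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// cgroupDriver issues the same Go-template query the test sends over
	// `minikube ssh`: docker info --format {{.CgroupDriver}}.
	func cgroupDriver() (string, error) {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		driver, err := cgroupDriver()
		fmt.Println(driver, err) // TestForceSystemdEnv expects "systemd" rather than "cgroupfs"
	}
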
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-17 10:55:46.817211 -0700 PDT m=+3648.712023875
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-504000 -n force-systemd-env-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-504000 -n force-systemd-env-504000: exit status 7 (80.69844ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:55:46.895960    6280 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 10:55:46.895983    6280 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-504000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-504000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-504000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-504000: (5.261488191s)
--- FAIL: TestForceSystemdEnv (233.96s)
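
A note on the post-mortem above: helpers_test.go probes host state with `status --format={{.Host}}`, and `minikube status` deliberately exits non-zero (here 7) to encode an unhealthy cluster, which is why the harness prints "may be ok" and skips log retrieval for a host that never started. A sketch of the same probe reading the JSON form instead of a template, for a single-node profile; hostState is an illustrative helper that tolerates the expected non-zero exit:

	package main

	import (
		"encoding/json"
		"errors"
		"fmt"
		"os/exec"
	)

	// hostState runs `minikube status -p <profile> -o json` and returns the
	// Host field ("Running", "Stopped", "Error", ...). A non-zero exit is
	// expected for unhealthy clusters, so only a failure to run the binary
	// or to parse its output is treated as an error.
	func hostState(profile string) (string, error) {
		out, err := exec.Command("minikube", "status", "-p", profile, "-o", "json").Output()
		var exitErr *exec.ExitError
		if err != nil && !errors.As(err, &exitErr) {
			return "", err
		}
		var st struct{ Host string }
		if uerr := json.Unmarshal(out, &st); uerr != nil {
			return "", uerr
		}
		return st.Host, nil
	}

	func main() {
		state, err := hostState("force-systemd-env-504000")
		fmt.Println(state, err)
	}
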

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 image ls --format short --alsologtostderr: exit status 14 (199.547299ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:17:17.692077    3760 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:17:17.694440    3760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:17:17.694450    3760 out.go:358] Setting ErrFile to fd 2...
	I0917 10:17:17.694455    3760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:17:17.694715    3760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:17:17.753637    3760 out.go:201] 
	W0917 10:17:17.775414    3760 out.go:270] X Exiting due to MK_USAGE: loading profile: unmarshal: unexpected end of JSON input
	X Exiting due to MK_USAGE: loading profile: unmarshal: unexpected end of JSON input
	W0917 10:17:17.775432    3760 out.go:270] * 
	* 
	W0917 10:17:17.777532    3760 out.go:293] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_image_87e4c92586e4a7ce40be5d809ce8cef1c4125060_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_image_87e4c92586e4a7ce40be5d809ce8cef1c4125060_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:17:17.814585    3760 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:263: listing image with minikube: exit status 14
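
Exit status 14 (MK_USAGE) is a config problem rather than a cluster one: `image ls` failed before contacting the VM because the profile's config.json could not be unmarshalled ("unexpected end of JSON input" is what encoding/json returns for an empty or truncated file). Given the MINIKUBE_HOME layout visible in the log paths, the failing load amounts to the following check; profileParses is a hypothetical name:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"path/filepath"
	)

	// profileParses reads a profile's config.json from the standard layout
	// ($MINIKUBE_HOME/profiles/<name>/config.json) and reports whether it is
	// valid JSON; an empty or truncated file yields the same
	// "unexpected end of JSON input" seen in the failure above.
	func profileParses(minikubeHome, name string) error {
		p := filepath.Join(minikubeHome, "profiles", name, "config.json")
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		var cfg map[string]any
		return json.Unmarshal(data, &cfg)
	}

	func main() {
		fmt.Println(profileParses(os.Getenv("MINIKUBE_HOME"), "functional-575000"))
	}
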

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (224.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-744000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-744000 -v=7 --alsologtostderr
E0917 10:22:41.919486    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-744000 -v=7 --alsologtostderr: (27.121072315s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-744000 --wait=true -v=7 --alsologtostderr
E0917 10:23:58.510337    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:24:03.841419    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-744000 --wait=true -v=7 --alsologtostderr: exit status 90 (3m13.434728451s)

                                                
                                                
-- stdout --
	* [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	* Restarting existing hyperkit VM for "ha-744000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	* Enabled addons: 
	
	* Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	* Restarting existing hyperkit VM for "ha-744000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-744000-m03" control-plane node in "ha-744000" cluster
	* Restarting existing hyperkit VM for "ha-744000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	* Starting "ha-744000-m04" worker node in "ha-744000" cluster
	* Restarting existing hyperkit VM for "ha-744000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:23:04.382852    4318 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:23:04.383033    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:23:04.383038    4318 out.go:358] Setting ErrFile to fd 2...
	I0917 10:23:04.383042    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:23:04.383233    4318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:23:04.384637    4318 out.go:352] Setting JSON to false
	I0917 10:23:04.410020    4318 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3151,"bootTime":1726590633,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:23:04.410173    4318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:23:04.431516    4318 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:23:04.474507    4318 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:23:04.474563    4318 notify.go:220] Checking for updates...
	I0917 10:23:04.517356    4318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:04.538348    4318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:23:04.559339    4318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:23:04.580471    4318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:23:04.622325    4318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:23:04.644148    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:04.644323    4318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:23:04.645084    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.645147    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:04.654766    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I0917 10:23:04.655119    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:04.655514    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:04.655526    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:04.655751    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:04.655871    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.684288    4318 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:23:04.726365    4318 start.go:297] selected driver: hyperkit
	I0917 10:23:04.726395    4318 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:04.726649    4318 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:23:04.726838    4318 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:23:04.727063    4318 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:23:04.736820    4318 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:23:04.742830    4318 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.742852    4318 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:23:04.746401    4318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:23:04.746441    4318 cni.go:84] Creating CNI manager for ""
	I0917 10:23:04.746483    4318 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:23:04.746565    4318 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:04.746687    4318 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:23:04.789252    4318 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:23:04.810326    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:04.810440    4318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:23:04.810514    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:04.810708    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:04.810727    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:04.810905    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:04.811872    4318 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:04.811982    4318 start.go:364] duration metric: took 85.186µs to acquireMachinesLock for "ha-744000"
	I0917 10:23:04.812017    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:04.812036    4318 fix.go:54] fixHost starting: 
	I0917 10:23:04.812477    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.812504    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:04.821489    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51899
	I0917 10:23:04.821836    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:04.822180    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:04.822195    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:04.822406    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:04.822525    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.822647    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:23:04.822729    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.822838    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 3812
	I0917 10:23:04.823848    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 3812 missing from process table
	I0917 10:23:04.823907    4318 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:23:04.823932    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:23:04.824033    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:04.845116    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:23:04.866254    4318 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:23:04.866533    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.866553    4318 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:23:04.868308    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 3812 missing from process table
	I0917 10:23:04.868320    4318 main.go:141] libmachine: (ha-744000) DBG | pid 3812 is in state "Stopped"
	I0917 10:23:04.868338    4318 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
	I0917 10:23:04.868639    4318 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:23:04.980045    4318 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:23:04.980073    4318 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:04.980180    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfce0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:04.980209    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfce0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:04.980265    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:04.980311    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:04.980327    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:04.981797    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Pid is 4331
	I0917 10:23:04.982233    4318 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:23:04.982246    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.982323    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:23:04.983974    4318 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:23:04.984040    4318 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:04.984071    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:04.984087    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c3c}
	I0917 10:23:04.984115    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0ba8}
	I0917 10:23:04.984133    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0b36}
	I0917 10:23:04.984146    4318 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:23:04.984156    4318 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
	I0917 10:23:04.984188    4318 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:23:04.984817    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:04.984996    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:04.985438    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:04.985457    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.985603    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:04.985698    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:04.985789    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:04.985886    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:04.985975    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:04.986095    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:04.986288    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:04.986295    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:04.989700    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:05.044525    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:05.045631    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:05.045647    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:05.045654    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:05.045662    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:05.426657    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:05.426678    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:05.541316    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:05.541359    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:05.541371    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:05.541450    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:05.542317    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:05.542326    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:23:11.152568    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:23:11.152612    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:23:11.152621    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:23:11.176948    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:23:14.298215    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0917 10:23:17.357957    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:23:17.357984    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.358136    4318 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:23:17.358148    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.358261    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.358357    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.358444    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.358547    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.358661    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.358802    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.358948    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.358957    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:23:17.423407    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:23:17.423427    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.423563    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.423676    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.423778    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.423878    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.424023    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.424163    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.424174    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:23:17.486445    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:23:17.486467    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:23:17.486482    4318 buildroot.go:174] setting up certificates
	I0917 10:23:17.486490    4318 provision.go:84] configureAuth start
	I0917 10:23:17.486499    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.486623    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:17.486725    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.486807    4318 provision.go:143] copyHostCerts
	I0917 10:23:17.486836    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:17.486889    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:23:17.486897    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:17.487028    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:23:17.487256    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:17.487285    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:23:17.487290    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:17.487357    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:23:17.487493    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:17.487527    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:23:17.487531    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:17.487595    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:23:17.487731    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
	I0917 10:23:17.613185    4318 provision.go:177] copyRemoteCerts
	I0917 10:23:17.613267    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:23:17.613292    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.613443    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.613545    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.613632    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.613733    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:17.649429    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:23:17.649501    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:23:17.668769    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:23:17.668834    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:23:17.688500    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:23:17.688567    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:23:17.707535    4318 provision.go:87] duration metric: took 221.030078ms to configureAuth
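configureAuth above generates a server certificate whose SAN list combines loopback, the VM IP, the machine name, and the generic names localhost/minikube. A self-contained Go sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair listed in the auth options:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value logged later
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-744000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}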
	I0917 10:23:17.707546    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:23:17.707708    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:17.707721    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:17.707852    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.707942    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.708031    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.708110    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.708196    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.708323    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.708452    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.708459    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:23:17.762984    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:23:17.762996    4318 buildroot.go:70] root file system type: tmpfs
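The root-filesystem probe is a one-liner: df emits only the fstype column for / and tail keeps the value. The same probe from Go, assuming a Linux guest where df supports --output:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the provisioner runs over SSH.
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("root filesystem type:", strings.TrimSpace(string(out)))
}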
	I0917 10:23:17.763071    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:23:17.763083    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.763221    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.763321    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.763414    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.763501    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.763654    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.763786    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.763831    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:23:17.831028    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:23:17.831050    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.831198    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.831285    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.831382    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.831474    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.831619    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.831766    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.831778    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:23:19.502053    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:23:19.502067    4318 machine.go:96] duration metric: took 14.516529187s to provisionDockerMachine
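The diff/mv/systemctl one-liner above is an install-if-changed guard: the freshly rendered docker.service.new only replaces the live unit (and only triggers daemon-reload/enable/restart) when the two files differ. Here the diff "fails" because no unit existed yet, so the new file is moved into place and the service is enabled. A sketch of the same pattern in Go, with the systemctl step abstracted as a callback so the demo has no side effects (paths and names are illustrative, not minikube's implementation):

package main

import (
	"bytes"
	"os"
)

// installUnit mirrors the guard above: replace the unit only when contents
// differ, and run the caller-supplied reload (systemctl daemon-reload /
// enable / restart in the real flow) only in that case.
func installUnit(path string, rendered []byte, reload func() error) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // no diff: leave the running daemon untouched
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	return reload()
}

func main() {
	unit := []byte("[Unit]\nDescription=demo unit\n")
	// Demo against a scratch path with a no-op reload so nothing restarts.
	if err := installUnit("/tmp/docker.service.demo", unit, func() error { return nil }); err != nil {
		panic(err)
	}
}

Writing to a sibling ".new" file and renaming keeps the swap atomic on the same filesystem, which is why the real sequence moves the file rather than rewriting the unit in place.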
	I0917 10:23:19.502080    4318 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:23:19.502098    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:23:19.502109    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.502292    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:23:19.502308    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.502398    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.502495    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.502582    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.502683    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.538092    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:23:19.544386    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:23:19.544403    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:23:19.544498    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:23:19.544649    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:23:19.544655    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:23:19.544826    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:23:19.556994    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:19.591561    4318 start.go:296] duration metric: took 89.471125ms for postStartSetup
	I0917 10:23:19.591589    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.591778    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:23:19.591792    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.591890    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.591986    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.592094    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.592189    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.628129    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:23:19.628204    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:23:19.683042    4318 fix.go:56] duration metric: took 14.870917903s for fixHost
	I0917 10:23:19.683065    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.683198    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.683290    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.683390    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.683480    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.683627    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:19.683766    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:19.683773    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:23:19.738877    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593799.774557135
	
	I0917 10:23:19.738891    4318 fix.go:216] guest clock: 1726593799.774557135
	I0917 10:23:19.738896    4318 fix.go:229] Guest: 2024-09-17 10:23:19.774557135 -0700 PDT Remote: 2024-09-17 10:23:19.683055 -0700 PDT m=+15.339523666 (delta=91.502135ms)
	I0917 10:23:19.738917    4318 fix.go:200] guest clock delta is within tolerance: 91.502135ms
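The clock check runs date +%s.%N in the guest and compares the result against host time; the 91.5ms delta logged here passes. A back-of-the-envelope Go version (the 2s tolerance is an assumption for illustration, and float parsing rounds away sub-microsecond precision):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestDelta parses `date +%s.%N` output and returns guest minus host time.
func guestDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Unix(0, int64(1726593799.683055*1e9))
	d, err := guestDelta("1726593799.774557135", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < 2*time.Second)
}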
	I0917 10:23:19.738921    4318 start.go:83] releasing machines lock for "ha-744000", held for 14.926834615s
	I0917 10:23:19.738935    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739067    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:19.739167    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739471    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739568    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739641    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:23:19.739673    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.739721    4318 ssh_runner.go:195] Run: cat /version.json
	I0917 10:23:19.739736    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.739766    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.739840    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.739856    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.739947    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.739962    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.740048    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.740062    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.740142    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.774171    4318 ssh_runner.go:195] Run: systemctl --version
	I0917 10:23:19.817235    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:23:19.822623    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:23:19.822678    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:23:19.837890    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:23:19.837904    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:19.838006    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:19.853023    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:23:19.862093    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:23:19.871068    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:23:19.871113    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:23:19.879912    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:19.888688    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:23:19.897529    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:19.906364    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:23:19.915519    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:23:19.924345    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:23:19.933204    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:23:19.942066    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:23:19.950115    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:23:19.958120    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:20.050394    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
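The run of sed invocations above rewrites /etc/containerd/config.toml in place: pin the sandbox image, force SystemdCgroup = false for the cgroupfs driver, migrate old runtime names to io.containerd.runc.v2, and so on. One of those edits reproduced in Go with a multiline regexp, purely for illustration (minikube drives sed over SSH instead):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}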
	I0917 10:23:20.067714    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:20.067803    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:23:20.081564    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:20.097350    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:23:20.111548    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:20.122410    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:20.132513    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:23:20.154104    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:20.164678    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:20.179449    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:23:20.182399    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:23:20.189403    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:23:20.202719    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:23:20.301120    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:23:20.410774    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:23:20.410853    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:23:20.425592    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:20.533399    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:23:22.845501    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31206782s)
	I0917 10:23:22.845569    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:23:22.857323    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:23:22.872057    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:22.882229    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:23:22.972546    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:23:23.076325    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.190977    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:23:23.204628    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:23.215649    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.315122    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:23:23.379549    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:23:23.379639    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
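"Will wait 60s for socket path" and the crictl wait that follows are simple poll loops around stat; the stat here succeeds on the first try. A minimal Go equivalent (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}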
	I0917 10:23:23.384126    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:23:23.384195    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:23:23.387269    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:23:23.412842    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:23:23.412931    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:23.429633    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:23.488622    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:23:23.488658    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:23.488993    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:23:23.492752    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
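The bash one-liner above refreshes the host.minikube.internal record: drop any line already ending in a tab plus that name, append the new mapping, and copy the temp file over /etc/hosts. An illustrative Go equivalent of the rewrite step (not minikube's code; it prints the new contents instead of writing them back):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes stale lines for name and appends a fresh mapping.
func setHostsEntry(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	hosts, _ := os.ReadFile("/etc/hosts")
	fmt.Print(setHostsEntry(string(hosts), "192.169.0.1", "host.minikube.internal"))
}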
	I0917 10:23:23.502567    4318 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:23:23.502656    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:23.502726    4318 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:23:23.518379    4318 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:23:23.518391    4318 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:23:23.518479    4318 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:23:23.534156    4318 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:23:23.534175    4318 cache_images.go:84] Images are preloaded, skipping loading
	I0917 10:23:23.534195    4318 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:23:23.534287    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:23:23.534379    4318 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:23:23.569331    4318 cni.go:84] Creating CNI manager for ""
	I0917 10:23:23.569343    4318 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:23:23.569361    4318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:23:23.569378    4318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:23:23.569456    4318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
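The kubeadm config above is produced by filling a template from the options struct logged at kubeadm.go:181. A toy text/template rendering of just the InitConfiguration fragment; the struct and field names here are invented for illustration:

package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts{AdvertiseAddress: "192.169.0.5", BindPort: 8443, NodeName: "ha-744000"}); err != nil {
		panic(err)
	}
}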
	
	I0917 10:23:23.569470    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:23:23.569527    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:23:23.582869    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:23:23.582932    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 10:23:23.582986    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:23:23.591650    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:23:23.591706    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:23:23.600248    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:23:23.613597    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:23:23.626900    4318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:23:23.640890    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:23:23.654403    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:23:23.657129    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:23.666988    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.767317    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:23.779290    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:23:23.779301    4318 certs.go:194] generating shared ca certs ...
	I0917 10:23:23.779311    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.779465    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:23:23.779530    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:23:23.779541    4318 certs.go:256] generating profile certs ...
	I0917 10:23:23.779629    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:23:23.779650    4318 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17
	I0917 10:23:23.779666    4318 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 10:23:23.841071    4318 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 ...
	I0917 10:23:23.841087    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17: {Name:mkab82f9fd921972a929c6516cc39a0a941fac49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.841637    4318 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17 ...
	I0917 10:23:23.841647    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17: {Name:mke24af4c0eaf07f776b7fe40f78c9c251937399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.841917    4318 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:23:23.842125    4318 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:23:23.842361    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:23:23.842370    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:23:23.842393    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:23:23.842415    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:23:23.842434    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:23:23.842453    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:23:23.842471    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:23:23.842488    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:23:23.842505    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:23:23.842587    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:23:23.842622    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:23:23.842630    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:23:23.842662    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:23:23.842691    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:23:23.842724    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:23:23.842794    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:23.842828    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:23:23.842858    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:23.842876    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:23:23.843373    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:23:23.870080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:23:23.894949    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:23:23.914532    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:23:23.943260    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:23:23.966311    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:23:23.996612    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:23:24.032495    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:23:24.071443    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:23:24.109203    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:23:24.145982    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:23:24.196620    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:23:24.212031    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:23:24.216442    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:23:24.225794    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.229210    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.229255    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.233534    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:23:24.242685    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:23:24.251758    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.255864    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.255908    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.260126    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:23:24.269138    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:23:24.278092    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.281460    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.281501    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.285770    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:23:24.294687    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:23:24.298152    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:23:24.302803    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:23:24.307168    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:23:24.311812    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:23:24.316345    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:23:24.320697    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
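Each openssl x509 -checkend 86400 probe above asks whether a certificate expires within the next 24 hours (exit status nonzero would force regeneration). The same check in Go with crypto/x509; the path is taken from the log and error handling is kept minimal:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert's NotAfter falls inside window.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}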
	I0917 10:23:24.325019    4318 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:24.325142    4318 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:23:24.337612    4318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:23:24.345939    4318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:23:24.345951    4318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:23:24.345995    4318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:23:24.354304    4318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:23:24.354625    4318 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.354704    4318 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:23:24.354943    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.355336    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.355573    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:23:24.355889    4318 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:23:24.356070    4318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:23:24.364125    4318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:23:24.364137    4318 kubeadm.go:597] duration metric: took 18.181933ms to restartPrimaryControlPlane
	I0917 10:23:24.364142    4318 kubeadm.go:394] duration metric: took 39.129847ms to StartCluster
	I0917 10:23:24.364150    4318 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.364222    4318 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.364601    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.364822    4318 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:23:24.364835    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:23:24.364845    4318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:23:24.365364    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:24.407801    4318 out.go:177] * Enabled addons: 
	I0917 10:23:24.449987    4318 addons.go:510] duration metric: took 84.961836ms for enable addons: enabled=[]
	I0917 10:23:24.450005    4318 start.go:246] waiting for cluster config update ...
	I0917 10:23:24.450011    4318 start.go:255] writing updated cluster config ...
	I0917 10:23:24.470905    4318 out.go:201] 
	I0917 10:23:24.492266    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:24.492406    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.514885    4318 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:23:24.556844    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:24.556881    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:24.557072    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:24.557091    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:24.557227    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.558233    4318 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:24.558336    4318 start.go:364] duration metric: took 78.234µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:23:24.558362    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:24.558375    4318 fix.go:54] fixHost starting: m02
	I0917 10:23:24.558805    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:24.558841    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:24.567958    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51922
	I0917 10:23:24.568283    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:24.568655    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:24.568674    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:24.568935    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:24.569064    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:24.569164    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:23:24.569268    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.569346    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4278
	I0917 10:23:24.570356    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4278 missing from process table
	I0917 10:23:24.570389    4318 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:23:24.570398    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:23:24.570487    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:24.612951    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:23:24.633920    4318 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:23:24.634199    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.634258    4318 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:23:24.636176    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4278 missing from process table
	I0917 10:23:24.636188    4318 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4278 is in state "Stopped"
	I0917 10:23:24.636209    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
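	The "pid 4278 missing from process table" / "Removing stale pid file" pair is the classic signal-0 liveness probe: read the pid from the file and send signal 0, which checks existence without delivering anything. A sketch of that pattern (not the driver's exact code; the driver runs as root, so the EPERM case is not handled here):

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    // pidAlive reports whether the pid recorded in pidfile is still in the
    // process table; an error from signal 0 (e.g. ESRCH) means it is gone.
    func pidAlive(pidfile string) (bool, error) {
    	b, err := os.ReadFile(pidfile)
    	if err != nil {
    		return false, err
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
    	if err != nil {
    		return false, err
    	}
    	return syscall.Kill(pid, 0) == nil, nil
    }

    func main() {
    	alive, err := pidAlive("hyperkit.pid")
    	if err == nil && !alive {
    		os.Remove("hyperkit.pid") // drop the stale pid file, as the log does
    	}
    	fmt.Println(alive, err)
    }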
	I0917 10:23:24.636621    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:23:24.663465    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:23:24.663489    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:24.663621    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:24.663651    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:24.663689    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:24.663725    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:24.663736    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:24.665138    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Pid is 4339
	I0917 10:23:24.665538    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:23:24.665551    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.665623    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:23:24.667294    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:23:24.667331    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:24.667353    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:23:24.667370    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:24.667381    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c3c}
	I0917 10:23:24.667387    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:23:24.667404    4318 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
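	The lookup above resolves the VM's generated MAC to an IP by scanning macOS's /var/db/dhcpd_leases, where each lease is a block of name=value lines (ip_address=..., hw_address=1,72:92:6:7e:7d:92). A simplified sketch of that scan; the real file is brace-delimited, and this version assumes ip_address precedes hw_address within a block, as it does in practice:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC returns the ip_address of the lease whose hw_address carries
    // the given MAC (the field is "<type>,<mac>", hence the comma check).
    func ipForMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
    }

    func main() {
    	fmt.Println(ipForMAC("/var/db/dhcpd_leases", "72:92:6:7e:7d:92"))
    }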
	I0917 10:23:24.667444    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:23:24.668104    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:24.668293    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.668710    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:24.668719    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:24.668846    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:24.668942    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:24.669029    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:24.669114    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:24.669205    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:24.669366    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:24.669585    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:24.669596    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:24.672842    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:24.682575    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:24.683443    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:24.683460    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:24.683476    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:24.683483    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:25.071063    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:25.071079    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:25.186245    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:25.186263    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:25.186274    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:25.186284    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:25.187156    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:25.187168    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:23:30.799209    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:23:30.799230    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:23:30.799236    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:23:30.822685    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:23:33.867917    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0917 10:23:36.934481    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:23:36.934496    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:36.934638    4318 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:23:36.934649    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:36.934745    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:36.934837    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:36.934932    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:36.935015    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:36.935112    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:36.935288    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:36.935440    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:36.935451    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:23:37.008879    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:23:37.008894    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.009061    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.009159    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.009242    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.009338    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.009486    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.009649    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.009660    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:23:37.078741    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
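	The shell block above is an idempotent /etc/hosts edit: if no line already ends with the hostname, either rewrite the existing 127.0.1.1 entry or append one. The same logic in Go, sketched under the assumption that a line-suffix match is close enough to the grep used above:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // ensureHostname maps 127.0.1.1 to name in hostsPath, rewriting an
    // existing 127.0.1.1 line or appending a new one, and doing nothing if
    // the hostname is already present.
    func ensureHostname(hostsPath, name string) error {
    	b, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(b) {
    		return nil // already mapped
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	line := []byte("127.0.1.1 " + name)
    	if re.Match(b) {
    		b = re.ReplaceAll(b, line)
    	} else {
    		b = append(b, append(line, '\n')...)
    	}
    	return os.WriteFile(hostsPath, b, 0644)
    }

    func main() {
    	fmt.Println(ensureHostname("/tmp/hosts", "ha-744000-m02"))
    }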
	I0917 10:23:37.078758    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:23:37.078768    4318 buildroot.go:174] setting up certificates
	I0917 10:23:37.078774    4318 provision.go:84] configureAuth start
	I0917 10:23:37.078780    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:37.078916    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:37.079043    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.079131    4318 provision.go:143] copyHostCerts
	I0917 10:23:37.079159    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:37.079221    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:23:37.079228    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:37.079376    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:23:37.079595    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:37.079637    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:23:37.079642    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:37.079718    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:23:37.079893    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:37.079933    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:23:37.079938    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:37.080019    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:23:37.080160    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
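	provision.go:117 issues a fresh server certificate signed by the shared minikube CA, embedding the SAN list shown in the log line (the loopback address, the node IP, and the hostnames). A condensed sketch of CA-signed issuance with crypto/x509; the key size, validity window, and the throwaway CA in main are illustrative choices, not minikube's exact parameters:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // signServerCert issues a server certificate for the given SANs, signed
    // by the CA pair, splitting SANs into IPAddresses and DNSNames.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	// Throwaway self-signed CA standing in for .minikube/certs/ca.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	pemBytes, err := signServerCert(ca, caKey, "jenkins.ha-744000-m02",
    		[]string{"127.0.0.1", "192.169.0.6", "ha-744000-m02", "localhost", "minikube"})
    	if err != nil {
    		panic(err)
    	}
    	os.Stdout.Write(pemBytes)
    }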
	I0917 10:23:37.154648    4318 provision.go:177] copyRemoteCerts
	I0917 10:23:37.154702    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:23:37.154717    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.154843    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.154952    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.155045    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.155124    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:37.199228    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:23:37.199298    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:23:37.219018    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:23:37.219098    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:23:37.237862    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:23:37.237936    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:23:37.256979    4318 provision.go:87] duration metric: took 178.197064ms to configureAuth
	I0917 10:23:37.256993    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:23:37.257173    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:37.257186    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:37.257323    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.257405    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.257494    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.257572    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.257650    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.257770    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.257893    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.257901    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:23:37.319570    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:23:37.319583    4318 buildroot.go:70] root file system type: tmpfs
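	buildroot.go:70 derives the root filesystem type from the probe two lines up; `df --output=fstype` is GNU coreutils, so this runs inside the Linux guest, not on the macOS host. A sketch of the same probe-and-parse:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType runs `df --output=fstype /` and returns the last field,
    // e.g. "tmpfs" on the buildroot guest (the header line is skipped by
    // taking the final whitespace-separated token).
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(string(out))
    	if len(fields) == 0 {
    		return "", fmt.Errorf("unexpected df output: %q", out)
    	}
    	return fields[len(fields)-1], nil
    }

    func main() {
    	fmt.Println(rootFSType())
    }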
	I0917 10:23:37.319682    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:23:37.319696    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.319826    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.319938    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.320027    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.320108    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.320250    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.320387    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.320434    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:23:37.391815    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:23:37.391831    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.391975    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.392081    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.392159    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.392252    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.392374    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.392517    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.392529    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:23:39.075500    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:23:39.075515    4318 machine.go:96] duration metric: took 14.406707663s to provisionDockerMachine
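	The `diff -u ... || { mv ...; systemctl ... restart docker; }` command above is a compare-then-swap: docker is only restarted when the rendered unit actually differs from what is installed (here diff fails because the file does not exist yet, so the new unit is moved into place and the service enabled). The same write-if-changed idea in Go, sketched with a placeholder for the systemctl step:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged writes content to path only when it differs from the
    // current file, returning true when the service needs a restart.
    func installIfChanged(path string, content []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return false, nil // unchanged: skip the disruptive restart
    	}
    	if err := os.WriteFile(path, content, 0644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	changed, err := installIfChanged("/tmp/docker.service", unit)
    	if err != nil {
    		panic(err)
    	}
    	if changed {
    		// stand-in for: systemctl daemon-reload && systemctl restart docker
    		fmt.Println("unit changed; would reload and restart docker")
    	}
    }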
	I0917 10:23:39.075523    4318 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:23:39.075537    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:23:39.075547    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.075750    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:23:39.075764    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.075857    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.075952    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.076033    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.076151    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.119221    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:23:39.122818    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:23:39.122833    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:23:39.122960    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:23:39.123143    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:23:39.123150    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:23:39.123359    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:23:39.133517    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:39.159170    4318 start.go:296] duration metric: took 83.636865ms for postStartSetup
	I0917 10:23:39.159198    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.159385    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:23:39.159399    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.159480    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.159562    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.159664    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.159748    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.198408    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:23:39.198471    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:23:39.229469    4318 fix.go:56] duration metric: took 14.671003724s for fixHost
	I0917 10:23:39.229492    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.229627    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.229719    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.229810    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.229886    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.230020    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:39.230204    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:39.230212    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:23:39.293184    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593819.261870922
	
	I0917 10:23:39.293196    4318 fix.go:216] guest clock: 1726593819.261870922
	I0917 10:23:39.293204    4318 fix.go:229] Guest: 2024-09-17 10:23:39.261870922 -0700 PDT Remote: 2024-09-17 10:23:39.229481 -0700 PDT m=+34.885826601 (delta=32.389922ms)
	I0917 10:23:39.293215    4318 fix.go:200] guest clock delta is within tolerance: 32.389922ms
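	The skew check at fix.go:216-229 runs `date +%s.%N` in the guest and subtracts the host's wall clock; only a delta beyond tolerance would force a resync. A sketch of the delta computation, assuming the full nine-digit %N nanosecond field (the tolerance constant itself is minikube's and not shown here):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output and returns guest minus host.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	host := time.Unix(1726593819, 229481000) // "Remote" timestamp from the log
    	d, err := clockDelta("1726593819.261870922", host)
    	fmt.Println(d, err) // ~32.39ms, within tolerance per the log
    }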
	I0917 10:23:39.293218    4318 start.go:83] releasing machines lock for "ha-744000-m02", held for 14.734778852s
	I0917 10:23:39.293233    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.293362    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:39.314064    4318 out.go:177] * Found network options:
	I0917 10:23:39.336076    4318 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:23:39.357954    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:23:39.357993    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.358861    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.359070    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.359183    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:23:39.359227    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:23:39.359301    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:23:39.359362    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.359383    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:23:39.359396    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.359477    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.359514    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.359570    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.359617    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.359685    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.359724    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.359838    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:23:39.394282    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:23:39.394363    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:23:39.443373    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:23:39.443395    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:39.443489    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:39.459065    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:23:39.468374    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:23:39.477348    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:23:39.477400    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:23:39.486283    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:39.495295    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:23:39.504241    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:39.513081    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:23:39.522253    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:23:39.531218    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:23:39.540147    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:23:39.549122    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:23:39.557208    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:23:39.565185    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:39.663216    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:23:39.682558    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:39.682635    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:23:39.697642    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:39.710638    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:23:39.730208    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:39.740809    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:39.751126    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:23:39.776526    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:39.786854    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:39.801713    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:23:39.804604    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:23:39.811689    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:23:39.825130    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:23:39.919765    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:23:40.027561    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:23:40.027584    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:23:40.041479    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:40.155257    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:23:42.501803    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346511037s)
	I0917 10:23:42.501877    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:23:42.512430    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:23:42.525247    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:42.535597    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:23:42.632719    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:23:42.733072    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:42.848472    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:23:42.862095    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:42.873097    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:42.974162    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:23:43.038704    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:23:43.038791    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
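	"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a plain poll loop over stat until the socket appears or the deadline passes. A sketch of that wait (the 500ms poll interval is an assumption, not minikube's value):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls for a filesystem path (here a unix socket) until it
    // exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }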
	I0917 10:23:43.043279    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:23:43.043348    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:23:43.046420    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:23:43.072844    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:23:43.072933    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:43.089215    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:43.128559    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:23:43.170903    4318 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 10:23:43.192137    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:43.192563    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:23:43.197213    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:43.206867    4318 mustload.go:65] Loading cluster: ha-744000
	I0917 10:23:43.207054    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:43.207326    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:43.207347    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:43.216115    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51945
	I0917 10:23:43.216443    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:43.216788    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:43.216802    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:43.217026    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:43.217137    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:23:43.217215    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:43.217301    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:23:43.218337    4318 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:23:43.218598    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:43.218625    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:43.227260    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51947
	I0917 10:23:43.227601    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:43.227937    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:43.227951    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:43.228147    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:43.228251    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:43.228345    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.6
	I0917 10:23:43.228352    4318 certs.go:194] generating shared ca certs ...
	I0917 10:23:43.228362    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:43.228527    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:23:43.228599    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:23:43.228607    4318 certs.go:256] generating profile certs ...
	I0917 10:23:43.228718    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:23:43.228804    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.026a9cc7
	I0917 10:23:43.228872    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:23:43.228880    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:23:43.228899    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:23:43.228920    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:23:43.228937    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:23:43.228954    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:23:43.228981    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:23:43.229010    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:23:43.229028    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:23:43.229119    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:23:43.229166    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:23:43.229175    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:23:43.229206    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:23:43.229242    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:23:43.229274    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:23:43.229342    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:43.229373    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.229393    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.229410    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.229434    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:43.229530    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:43.229617    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:43.229683    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:43.229765    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:43.256849    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 10:23:43.260879    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 10:23:43.269481    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 10:23:43.272632    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 10:23:43.280513    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 10:23:43.283582    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 10:23:43.291364    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 10:23:43.294480    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 10:23:43.302789    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 10:23:43.305925    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 10:23:43.313934    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 10:23:43.316968    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 10:23:43.325080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:23:43.345191    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:23:43.364654    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:23:43.384379    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:23:43.404164    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:23:43.424264    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:23:43.444115    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:23:43.463631    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:23:43.483492    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:23:43.502975    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:23:43.522485    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:23:43.543691    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 10:23:43.558295    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 10:23:43.571956    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 10:23:43.585450    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 10:23:43.598936    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 10:23:43.612569    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 10:23:43.626000    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
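
Note: the `scp memory --> <path>` lines above stream freshly generated files (service-account keys, front-proxy and etcd CAs, kubeconfig) from the test binary straight into the VM over the SSH session opened at sshutil.go:53. A minimal standalone sketch of that pattern with golang.org/x/crypto/ssh; the host, user, and key path are taken from the log, while the `sudo tee` transport and the example payload are illustrative assumptions (minikube's ssh_runner has its own transfer path):

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams an in-memory payload to remotePath via `sudo tee`,
// mirroring the "scp memory --> <path>" steps in the log above.
func writeRemote(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := writeRemote(client, []byte("example payload\n"), "/tmp/example.txt"); err != nil {
		panic(err)
	}
}
```
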
	I0917 10:23:43.639468    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:23:43.643552    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:23:43.652183    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.655515    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.655555    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.659696    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:23:43.668232    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:23:43.676488    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.679940    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.679985    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.684222    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:23:43.692551    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:23:43.700894    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.704479    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.704526    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.708650    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
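
Note: each `openssl x509 -hash` / `ln -fs` pair above installs a CA under /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL-based clients locate trusted roots. A minimal sketch of the same hash-and-symlink step, shelling out to the same openssl invocation; the local paths are hypothetical (the log performs this remotely over SSH as root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of: ln -fs <certPath> <link> (drop any stale link first).
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical local paths standing in for the remote ones in the log.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
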
	I0917 10:23:43.716969    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:23:43.720371    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:23:43.724736    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:23:43.728968    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:23:43.733213    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:23:43.737400    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:23:43.741597    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
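
Note: each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. The same check in pure Go with crypto/x509, against one of the cert paths from the log (reading the file locally is an assumption; the test runs the check inside the VM):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// i.e. the condition under which `openssl x509 -checkend` exits non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```
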
	I0917 10:23:43.745820    4318 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 10:23:43.745877    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:23:43.745890    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:23:43.745926    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:23:43.758434    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:23:43.758473    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
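
Note: the kube-vip manifest above is rendered from a template and dropped into /etc/kubernetes/manifests as a static pod (see the `scp memory --> /etc/kubernetes/manifests/kube-vip.yaml` line below). A heavily abbreviated text/template sketch of that kind of rendering; the template here is illustrative, not minikube's actual kube-vip.go template:

```go
package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the static-pod template; only three parameters.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from this run's log: VIP 192.169.0.254, port 8443, eth0.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.169.0.254", Interface: "eth0", Port: 8443})
}
```
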
	I0917 10:23:43.758527    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:23:43.766283    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:23:43.766331    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 10:23:43.773641    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 10:23:43.786920    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:23:43.800443    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:23:43.813790    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:23:43.816730    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
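
Note: the shell pipeline above rewrites /etc/hosts in place: filter out any stale `control-plane.minikube.internal` entry, append the current VIP, and copy the result back. The same filter-then-append rewrite as a standalone Go sketch (a temp file plus rename stands in for the `/tmp/h.$$` and `sudo cp` step):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any line ending in "\t<name>" and appends "ip\tname",
// matching the grep -v / echo pipeline in the log.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
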
	I0917 10:23:43.826099    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:43.934702    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:43.949825    4318 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:23:43.950025    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:43.971583    4318 out.go:177] * Verifying Kubernetes components...
	I0917 10:23:44.013350    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:44.148955    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:44.167233    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:44.167427    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 10:23:44.167473    4318 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 10:23:44.167643    4318 node_ready.go:35] waiting up to 6m0s for node "ha-744000-m02" to be "Ready" ...
	I0917 10:23:44.167726    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:44.167731    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:44.167739    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:44.167743    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.307737    4318 round_trippers.go:574] Response Status: 200 OK in 8139 milliseconds
	I0917 10:23:52.308306    4318 node_ready.go:49] node "ha-744000-m02" has status "Ready":"True"
	I0917 10:23:52.308317    4318 node_ready.go:38] duration metric: took 8.140607385s for node "ha-744000-m02" to be "Ready" ...
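
Note: node_ready.go polls the node object until its NodeReady condition reports True, within the 6m0s budget logged above. A minimal standalone client-go sketch of that wait loop; the kubeconfig path is a placeholder, and the 2-second poll interval is an assumption (the test's actual interval is not shown in the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-744000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for node")
}
```
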
	I0917 10:23:52.308324    4318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:23:52.308363    4318 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:23:52.308373    4318 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:23:52.308426    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:52.308431    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.308441    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.308444    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.320722    4318 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0917 10:23:52.327343    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.327408    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j9jcc
	I0917 10:23:52.327415    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.327421    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.327424    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.333529    4318 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 10:23:52.334030    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.334039    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.334045    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.334048    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.338396    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:52.338672    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.338681    4318 pod_ready.go:82] duration metric: took 11.322168ms for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.338688    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.338729    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-khnlh
	I0917 10:23:52.338734    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.338739    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.338744    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.344023    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:23:52.344589    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.344597    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.344602    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.344606    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.349539    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:52.349983    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.349992    4318 pod_ready.go:82] duration metric: took 11.298293ms for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.349999    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.350040    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000
	I0917 10:23:52.350045    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.350051    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.350055    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.357637    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:23:52.358005    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.358013    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.358019    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.358027    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.365136    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:23:52.365716    4318 pod_ready.go:93] pod "etcd-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.365726    4318 pod_ready.go:82] duration metric: took 15.722025ms for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.365733    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.365780    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m02
	I0917 10:23:52.365789    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.365795    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.365799    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.369072    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.369567    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:52.369575    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.369581    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.369584    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.373049    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.373553    4318 pod_ready.go:93] pod "etcd-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.373563    4318 pod_ready.go:82] duration metric: took 7.825215ms for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.373570    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.373616    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:23:52.373621    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.373626    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.373631    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.376282    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.509242    4318 request.go:632] Waited for 132.500318ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:52.509283    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:52.509290    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.509317    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.509323    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.513207    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.513696    4318 pod_ready.go:93] pod "etcd-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.513705    4318 pod_ready.go:82] duration metric: took 140.128679ms for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
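
Note: the recurring `Waited for ... due to client-side throttling` lines come from the Kubernetes client's flow-control rate limiter, a token bucket that by default allows 5 requests/second with a burst of 10 (assuming this run does not override rest.Config QPS/Burst). A minimal sketch of the same behavior with golang.org/x/time/rate: the first 10 calls pass immediately, then calls are spaced ~200ms apart, which matches the ~130-200ms waits in the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Approximates client-go's defaults (assumed): 5 QPS, burst of 10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	start := time.Now()
	for i := 0; i < 20; i++ {
		_ = limiter.Wait(context.Background()) // blocks once the burst is spent
		fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(10*time.Millisecond))
	}
}
```
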
	I0917 10:23:52.513724    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.709621    4318 request.go:632] Waited for 195.859717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:23:52.709653    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:23:52.709657    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.709664    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.709669    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.711912    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.908496    4318 request.go:632] Waited for 196.021957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.908552    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.908558    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.908563    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.908566    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.911337    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.911774    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.911783    4318 pod_ready.go:82] duration metric: took 398.052058ms for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.911790    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.108964    4318 request.go:632] Waited for 197.132834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:23:53.109014    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:23:53.109019    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.109025    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.109029    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.112077    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:53.308769    4318 request.go:632] Waited for 196.065261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:53.308824    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:53.308830    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.308836    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.308840    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.313525    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:53.313816    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:53.313826    4318 pod_ready.go:82] duration metric: took 402.029202ms for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.313836    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.509951    4318 request.go:632] Waited for 196.074667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:23:53.509985    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:23:53.509990    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.510035    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.510042    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.514822    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:53.709150    4318 request.go:632] Waited for 193.647696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:53.709201    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:53.709210    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.709254    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.709264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.712954    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:53.713373    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:53.713382    4318 pod_ready.go:82] duration metric: took 399.538201ms for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.713389    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.908806    4318 request.go:632] Waited for 195.370205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:23:53.908887    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:23:53.908897    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.908909    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.908917    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.911967    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.108997    4318 request.go:632] Waited for 196.429766ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:54.109063    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:54.109070    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.109082    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.109089    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.112475    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.114386    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.114395    4318 pod_ready.go:82] duration metric: took 400.998189ms for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.114402    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.308794    4318 request.go:632] Waited for 194.35354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:23:54.308838    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:23:54.308874    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.308882    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.308915    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.311225    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:54.508611    4318 request.go:632] Waited for 197.017438ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:54.508643    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:54.508648    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.508654    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.508658    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.513358    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:54.514643    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.514653    4318 pod_ready.go:82] duration metric: took 400.244458ms for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.514660    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.709389    4318 request.go:632] Waited for 194.662221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:23:54.709498    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:23:54.709508    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.709517    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.709522    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.712945    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.908904    4318 request.go:632] Waited for 195.122532ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:54.908956    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:54.908964    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.908976    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.908984    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.912489    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.912833    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.912844    4318 pod_ready.go:82] duration metric: took 398.175427ms for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.912853    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.109718    4318 request.go:632] Waited for 196.795087ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:23:55.109851    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:23:55.109863    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.109874    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.109880    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.113014    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:55.310231    4318 request.go:632] Waited for 196.716951ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:23:55.310297    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:23:55.310304    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.310310    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.310327    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.312467    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.312877    4318 pod_ready.go:93] pod "kube-proxy-66bkb" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:55.312887    4318 pod_ready.go:82] duration metric: took 400.026129ms for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.312894    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.508659    4318 request.go:632] Waited for 195.71304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:23:55.508705    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:23:55.508714    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.508762    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.508776    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.511406    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.709478    4318 request.go:632] Waited for 197.620419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:55.709553    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:55.709561    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.709569    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.709573    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.712068    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.712400    4318 pod_ready.go:93] pod "kube-proxy-6xd2h" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:55.712409    4318 pod_ready.go:82] duration metric: took 399.507321ms for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.712415    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.908839    4318 request.go:632] Waited for 196.378567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:23:55.908879    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:23:55.908886    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.908894    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.908903    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.911317    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.108670    4318 request.go:632] Waited for 196.90743ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:56.108733    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:56.108741    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.108750    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.108755    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.111013    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.111432    4318 pod_ready.go:93] pod "kube-proxy-c5xbc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.111441    4318 pod_ready.go:82] duration metric: took 399.01941ms for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.111448    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.309131    4318 request.go:632] Waited for 197.638325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:23:56.309195    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:23:56.309203    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.309211    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.309218    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.311722    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.510036    4318 request.go:632] Waited for 197.949522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:56.510102    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:56.510108    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.510114    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.510116    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.514224    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:56.514571    4318 pod_ready.go:93] pod "kube-proxy-k9xsp" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.514581    4318 pod_ready.go:82] duration metric: took 403.125717ms for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.514588    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.708850    4318 request.go:632] Waited for 194.175339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:23:56.708991    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:23:56.709003    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.709014    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.709019    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.712753    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:56.909408    4318 request.go:632] Waited for 196.094397ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:56.909453    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:56.909458    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.909464    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.909469    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.911617    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.911990    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.911998    4318 pod_ready.go:82] duration metric: took 397.403001ms for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.912004    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.108563    4318 request.go:632] Waited for 196.516714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:23:57.108623    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:23:57.108651    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.108657    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.108661    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.111145    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:57.310537    4318 request.go:632] Waited for 198.433255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:57.310658    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:57.310670    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.310681    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.310688    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.313850    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:57.314399    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:57.314411    4318 pod_ready.go:82] duration metric: took 402.398279ms for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.314420    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.508583    4318 request.go:632] Waited for 194.120837ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:23:57.508650    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:23:57.508656    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.508662    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.508667    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.510939    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:57.709335    4318 request.go:632] Waited for 198.006371ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:57.709452    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:57.709463    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.709475    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.709482    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.712690    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:57.713150    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:57.713163    4318 pod_ready.go:82] duration metric: took 398.73468ms for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.713172    4318 pod_ready.go:39] duration metric: took 5.404804093s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:23:57.713193    4318 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:23:57.713279    4318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:23:57.724647    4318 api_server.go:72] duration metric: took 13.774712051s to wait for apiserver process to appear ...
	I0917 10:23:57.724659    4318 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:23:57.724675    4318 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 10:23:57.728863    4318 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
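
Note: the healthz probe above is a plain HTTPS GET against the apiserver, considered healthy when it returns 200 with body `ok`. A minimal sketch using the cluster CA and client cert/key from this profile; the file paths are placeholders for the ones listed earlier in the log:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	ca, err := os.ReadFile("/path/to/.minikube/ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)
	cert, err := tls.LoadX509KeyPair("/path/to/client.crt", "/path/to/client.key") // placeholders
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	}}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // healthy run: 200 ok
}
```
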
	I0917 10:23:57.728906    4318 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 10:23:57.728911    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.728929    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.728935    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.729498    4318 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 10:23:57.729550    4318 api_server.go:141] control plane version: v1.31.1
	I0917 10:23:57.729558    4318 api_server.go:131] duration metric: took 4.895474ms to wait for apiserver health ...
	I0917 10:23:57.729563    4318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 10:23:57.909401    4318 request.go:632] Waited for 179.781674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:57.909604    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:57.909621    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.909636    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.909648    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.914890    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:23:57.920746    4318 system_pods.go:59] 26 kube-system pods found
	I0917 10:23:57.920767    4318 system_pods.go:61] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running
	I0917 10:23:57.920771    4318 system_pods.go:61] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running
	I0917 10:23:57.920774    4318 system_pods.go:61] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:23:57.920780    4318 system_pods.go:61] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 10:23:57.920785    4318 system_pods.go:61] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:23:57.920789    4318 system_pods.go:61] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:23:57.920791    4318 system_pods.go:61] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:23:57.920796    4318 system_pods.go:61] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 10:23:57.920802    4318 system_pods.go:61] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:23:57.920805    4318 system_pods.go:61] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:23:57.920808    4318 system_pods.go:61] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 10:23:57.920811    4318 system_pods.go:61] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:23:57.920815    4318 system_pods.go:61] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:23:57.920819    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 10:23:57.920824    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:23:57.920827    4318 system_pods.go:61] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:23:57.920829    4318 system_pods.go:61] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:23:57.920832    4318 system_pods.go:61] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:23:57.920836    4318 system_pods.go:61] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 10:23:57.920839    4318 system_pods.go:61] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:23:57.920844    4318 system_pods.go:61] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 10:23:57.920848    4318 system_pods.go:61] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:23:57.920851    4318 system_pods.go:61] "kube-vip-ha-744000" [4613d53e-c3b7-48eb-bb87-057beab671e7] Running
	I0917 10:23:57.920858    4318 system_pods.go:61] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:23:57.920862    4318 system_pods.go:61] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:23:57.920864    4318 system_pods.go:61] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:23:57.920868    4318 system_pods.go:74] duration metric: took 191.300068ms to wait for pod list to return data ...
	I0917 10:23:57.920876    4318 default_sa.go:34] waiting for default service account to be created ...
	I0917 10:23:58.108816    4318 request.go:632] Waited for 187.888047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:23:58.108877    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:23:58.108885    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.108893    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.108898    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.111818    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:58.111952    4318 default_sa.go:45] found service account: "default"
	I0917 10:23:58.111961    4318 default_sa.go:55] duration metric: took 191.079569ms for default service account to be created ...
	I0917 10:23:58.111967    4318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 10:23:58.309003    4318 request.go:632] Waited for 196.929892ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:58.309102    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:58.309111    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.309136    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.309143    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.314149    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:58.319524    4318 system_pods.go:86] 26 kube-system pods found
	I0917 10:23:58.319535    4318 system_pods.go:89] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running
	I0917 10:23:58.319541    4318 system_pods.go:89] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running
	I0917 10:23:58.319544    4318 system_pods.go:89] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:23:58.319549    4318 system_pods.go:89] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 10:23:58.319554    4318 system_pods.go:89] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:23:58.319557    4318 system_pods.go:89] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:23:58.319567    4318 system_pods.go:89] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:23:58.319571    4318 system_pods.go:89] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 10:23:58.319580    4318 system_pods.go:89] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:23:58.319584    4318 system_pods.go:89] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:23:58.319588    4318 system_pods.go:89] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 10:23:58.319591    4318 system_pods.go:89] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:23:58.319595    4318 system_pods.go:89] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:23:58.319599    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 10:23:58.319602    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:23:58.319612    4318 system_pods.go:89] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:23:58.319616    4318 system_pods.go:89] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:23:58.319618    4318 system_pods.go:89] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:23:58.319622    4318 system_pods.go:89] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 10:23:58.319628    4318 system_pods.go:89] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:23:58.319632    4318 system_pods.go:89] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 10:23:58.319635    4318 system_pods.go:89] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:23:58.319639    4318 system_pods.go:89] "kube-vip-ha-744000" [4613d53e-c3b7-48eb-bb87-057beab671e7] Running
	I0917 10:23:58.319642    4318 system_pods.go:89] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:23:58.319644    4318 system_pods.go:89] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:23:58.319647    4318 system_pods.go:89] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:23:58.319651    4318 system_pods.go:126] duration metric: took 207.678997ms to wait for k8s-apps to be running ...
	I0917 10:23:58.319662    4318 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 10:23:58.319720    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:23:58.331325    4318 system_svc.go:56] duration metric: took 11.65971ms WaitForService to wait for kubelet
	I0917 10:23:58.331338    4318 kubeadm.go:582] duration metric: took 14.381399967s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:23:58.331366    4318 node_conditions.go:102] verifying NodePressure condition ...
	I0917 10:23:58.509807    4318 request.go:632] Waited for 178.384911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 10:23:58.509886    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 10:23:58.509895    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.509908    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.509913    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.514102    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:58.514949    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514961    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514970    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514973    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514976    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514979    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514982    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514995    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.515002    4318 node_conditions.go:105] duration metric: took 183.62967ms to run NodePressure ...
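
Editor's note: the readiness gate above is two checks back to back — every kube-system pod must report phase Running (the pods flagged "Ready:ContainersNotReady" on m02 still count as Running at the pod level), and each node must expose sane CPU and ephemeral-storage capacity. A minimal client-go sketch of those two checks, assuming a kubeconfig at a hypothetical path rather than minikube's internal wiring:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the real run uses the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// Check 1: all kube-system pods report phase Running.
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("pod %s is %s\n", p.Name, p.Status.Phase)
			}
		}

		// Check 2: per-node capacity, mirroring the node_conditions lines above.
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
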
	I0917 10:23:58.515010    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:23:58.515030    4318 start.go:255] writing updated cluster config ...
	I0917 10:23:58.535539    4318 out.go:201] 
	I0917 10:23:58.573360    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:58.573455    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.595258    4318 out.go:177] * Starting "ha-744000-m03" control-plane node in "ha-744000" cluster
	I0917 10:23:58.653092    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:58.653125    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:58.653337    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:58.653370    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:58.653501    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.654346    4318 start.go:360] acquireMachinesLock for ha-744000-m03: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:58.654469    4318 start.go:364] duration metric: took 97.666µs to acquireMachinesLock for "ha-744000-m03"
	I0917 10:23:58.654496    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:58.654503    4318 fix.go:54] fixHost starting: m03
	I0917 10:23:58.655039    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:58.655076    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:58.665444    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0917 10:23:58.665867    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:58.666300    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:58.666321    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:58.666529    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:58.666645    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:23:58.666734    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetState
	I0917 10:23:58.666815    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.666929    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 3837
	I0917 10:23:58.667977    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid 3837 missing from process table
	I0917 10:23:58.668019    4318 fix.go:112] recreateIfNeeded on ha-744000-m03: state=Stopped err=<nil>
	I0917 10:23:58.668029    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	W0917 10:23:58.668111    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:58.707286    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m03" ...
	I0917 10:23:58.781042    4318 main.go:141] libmachine: (ha-744000-m03) Calling .Start
	I0917 10:23:58.781398    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.781451    4318 main.go:141] libmachine: (ha-744000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid
	I0917 10:23:58.783354    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid 3837 missing from process table
	I0917 10:23:58.783371    4318 main.go:141] libmachine: (ha-744000-m03) DBG | pid 3837 is in state "Stopped"
	I0917 10:23:58.783401    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid...
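
Editor's note: the stale-pid handling above ("hyperkit pid 3837 missing from process table", pid in state "Stopped", remove the pid file) is the standard Unix liveness probe: read the pid file, send signal 0, and if the process is gone delete the file before launching a fresh hyperkit. A sketch of that idiom, with the pid-file path as a stand-in for the machine directory:

	package main

	import (
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// pidAlive reports whether a process with the given pid exists.
	// Signal 0 performs the existence check without delivering a signal
	// (it can also return EPERM for a live process owned by another user,
	// which a production version would distinguish).
	func pidAlive(pid int) bool {
		p, err := os.FindProcess(pid) // never fails on Unix
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}

	func main() {
		const pidFile = "/path/to/hyperkit.pid" // stand-in path
		b, err := os.ReadFile(pidFile)
		if err != nil {
			return // no pid file: nothing to clean up
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
		if err == nil && !pidAlive(pid) {
			os.Remove(pidFile) // stale: owner is gone, safe to remove
		}
	}
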
	I0917 10:23:58.783560    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Using UUID 2629e9cb-d7e0-4a36-a6bd-c4320ca3711f
	I0917 10:23:58.808610    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Generated MAC 5a:8d:be:33:c3:18
	I0917 10:23:58.808632    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:58.808748    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004040c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:58.808788    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004040c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:58.808853    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/ha-744000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:58.808899    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2629e9cb-d7e0-4a36-a6bd-c4320ca3711f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/ha-744000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
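
Editor's note, decoding that hyperkit invocation: -A generates ACPI tables, -u keeps the RTC on UTC, -F writes the pid file checked above, -c 2 / -m 2200M size the VM, and the -s flags build the PCI topology (0:0 hostbridge, 31 lpc, 1:0 virtio-net, 2:0 virtio-blk on the rawdisk, 3 ahci-cd on boot2docker.iso, 4 virtio-rnd). -U fixes the VM UUID, from which vmnet derives a stable MAC, which is why the "Generated MAC" below can be matched against old DHCP leases. -l com1 attaches the serial console to an auto-allocated pty plus the console-ring log, and -f kexec,<bzimage>,<initrd>,<cmdline> boots the kernel directly, with no bootloader or EFI firmware involved.
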
	I0917 10:23:58.808915    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:58.810278    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Pid is 4346
	I0917 10:23:58.810623    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Attempt 0
	I0917 10:23:58.810633    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.810707    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 4346
	I0917 10:23:58.812422    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Searching for 5a:8d:be:33:c3:18 in /var/db/dhcpd_leases ...
	I0917 10:23:58.812491    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:58.812547    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:23:58.812578    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:23:58.812610    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:58.812627    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetConfigRaw
	I0917 10:23:58.812629    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0ba8}
	I0917 10:23:58.812645    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Found match: 5a:8d:be:33:c3:18
	I0917 10:23:58.812659    4318 main.go:141] libmachine: (ha-744000-m03) DBG | IP: 192.169.0.7
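
Editor's note: the address discovery above greps macOS's DHCP lease database for the MAC generated for the VM's virtio-net device. A sketch of that lookup, assuming the key=value block layout that bootpd writes to /var/db/dhcpd_leases (with ip_address= preceding hw_address= inside each entry, as the parsed entries above suggest):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const mac = "5a:8d:be:33:c3:18" // the MAC hyperkit generated for this VM
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			// hw_address lines look like "hw_address=1,5a:8d:be:33:c3:18",
			// so a suffix match on the bare MAC is sufficient.
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				fmt.Println("IP:", ip)
				return
			}
		}
	}
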
	I0917 10:23:58.813322    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:23:58.813511    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.814083    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:58.814095    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:23:58.814255    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:23:58.814354    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:23:58.814443    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:23:58.814551    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:23:58.814660    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:23:58.814840    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:58.815013    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:23:58.815022    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:58.818431    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:58.826878    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:58.827963    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:58.827996    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:58.828016    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:58.828056    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:59.216264    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:59.216286    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:59.331075    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:59.331093    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:59.331106    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:59.331113    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:59.331943    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:59.331953    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:24:04.953344    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:24:04.953400    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:24:04.953409    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:24:04.976712    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:24:08.843565    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.7:22: connect: connection refused
	I0917 10:24:11.901419    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:24:11.901434    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:11.901561    4318 buildroot.go:166] provisioning hostname "ha-744000-m03"
	I0917 10:24:11.901572    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:11.901663    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:11.901749    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:11.901841    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.901928    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.902023    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:11.902156    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:11.902302    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:11.902310    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m03 && echo "ha-744000-m03" | sudo tee /etc/hostname
	I0917 10:24:11.969021    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m03
	
	I0917 10:24:11.969036    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:11.969172    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:11.969284    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.969390    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.969484    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:11.969628    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:11.969778    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:11.969789    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:24:12.032993    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
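
Editor's note: every "About to run SSH command" step in this stretch (hostname, the /etc/hosts guard just above, the systemd unit install below) is a one-shot exec over the VM's SSH endpoint using the machine's generated key; the "connection refused" at 10:24:08 is just the dialer retrying until sshd comes up inside the guest. A minimal golang.org/x/crypto/ssh sketch of the pattern, with the key path as a stand-in:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-744000-m03/id_rsa") // stand-in path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches the trust model of a local VM
		}
		client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// The same command the provisioner runs above.
		out, err := sess.Output(`sudo hostname ha-744000-m03 && echo "ha-744000-m03" | sudo tee /etc/hostname`)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
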
	I0917 10:24:12.033009    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:24:12.033021    4318 buildroot.go:174] setting up certificates
	I0917 10:24:12.033027    4318 provision.go:84] configureAuth start
	I0917 10:24:12.033034    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:12.033164    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:12.033268    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.033363    4318 provision.go:143] copyHostCerts
	I0917 10:24:12.033396    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:24:12.033443    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:24:12.033450    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:24:12.033597    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:24:12.033799    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:24:12.033838    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:24:12.033843    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:24:12.033926    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:24:12.034067    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:24:12.034095    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:24:12.034100    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:24:12.034194    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:24:12.034361    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m03 san=[127.0.0.1 192.169.0.7 ha-744000-m03 localhost minikube]
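
Editor's note: the server cert minted above is only valid for the SAN set printed in the log — 127.0.0.1, 192.169.0.7, ha-744000-m03, localhost, minikube. A standard-library sketch of producing a cert with those SANs; the real flow signs with ca.pem/ca-key.pem, whereas this one self-signs for brevity and uses arbitrary serial and validity values:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1), // arbitrary for the sketch
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN set from the provision.go line above.
			DNSNames:    []string{"ha-744000-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		}
		// Self-signed here; the real flow passes the CA cert and CA key instead of tmpl/key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = der // would be PEM-encoded and written to server.pem
	}
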
	I0917 10:24:12.149328    4318 provision.go:177] copyRemoteCerts
	I0917 10:24:12.149388    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:24:12.149403    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.149590    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.149685    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.149761    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.149846    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:12.184712    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:24:12.184807    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:24:12.204199    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:24:12.204267    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:24:12.223758    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:24:12.223831    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:24:12.243169    4318 provision.go:87] duration metric: took 210.132957ms to configureAuth
	I0917 10:24:12.243183    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:24:12.243371    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:12.243385    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:12.243518    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.243598    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.243687    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.243761    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.243855    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.243970    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.244103    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.244110    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:24:12.301530    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:24:12.301541    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:24:12.301620    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:24:12.301632    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.301763    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.301869    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.301966    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.302040    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.302167    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.302303    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.302348    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:24:12.370095    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:24:12.370113    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.370241    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.370333    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.370424    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.370523    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.370657    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.370794    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.370805    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:24:14.004628    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:24:14.004644    4318 machine.go:96] duration metric: took 15.190455794s to provisionDockerMachine
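
Editor's note on the unit install above: the "diff ... || { mv ...; daemon-reload; enable; restart; }" guard makes the write idempotent — here diff fails because /lib/systemd/system/docker.service does not exist yet on the freshly booted VM, so the new unit is moved into place, enabled (the "Created symlink" line), and docker restarted. The two Environment=NO_PROXY lines in the unit are also harmless: systemd permits repeated Environment= directives, and when the same variable is assigned twice the later assignment wins, so dockerd ends up with NO_PROXY=192.169.0.5,192.169.0.6.
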
	I0917 10:24:14.004650    4318 start.go:293] postStartSetup for "ha-744000-m03" (driver="hyperkit")
	I0917 10:24:14.004657    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:24:14.004672    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.004878    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:24:14.004901    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.005017    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.005138    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.005237    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.005322    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.044460    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:24:14.048554    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:24:14.048568    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:24:14.048680    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:24:14.048820    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:24:14.048826    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:24:14.048988    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:24:14.057354    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:24:14.088743    4318 start.go:296] duration metric: took 84.082897ms for postStartSetup
	I0917 10:24:14.088765    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.088958    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:24:14.088972    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.089062    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.089149    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.089239    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.089326    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.124314    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:24:14.124387    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:24:14.177086    4318 fix.go:56] duration metric: took 15.522482042s for fixHost
	I0917 10:24:14.177117    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.177268    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.177375    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.177470    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.177560    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.177699    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:14.177847    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:14.177855    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:24:14.235217    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593854.127008624
	
	I0917 10:24:14.235235    4318 fix.go:216] guest clock: 1726593854.127008624
	I0917 10:24:14.235240    4318 fix.go:229] Guest: 2024-09-17 10:24:14.127008624 -0700 PDT Remote: 2024-09-17 10:24:14.177103 -0700 PDT m=+69.833227660 (delta=-50.094376ms)
	I0917 10:24:14.235251    4318 fix.go:200] guest clock delta is within tolerance: -50.094376ms
	I0917 10:24:14.235255    4318 start.go:83] releasing machines lock for "ha-744000-m03", held for 15.580676894s
	I0917 10:24:14.235272    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.235402    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:14.257745    4318 out.go:177] * Found network options:
	I0917 10:24:14.279018    4318 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 10:24:14.300830    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:24:14.300855    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:24:14.300870    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301356    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301486    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301594    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:24:14.301623    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	W0917 10:24:14.301663    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:24:14.301685    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:24:14.301770    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:24:14.301785    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.301824    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.301934    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.301945    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.302070    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.302137    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.302238    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.302321    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.302438    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	W0917 10:24:14.334246    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:24:14.334313    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:24:14.380907    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:24:14.380924    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:24:14.381008    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:24:14.397032    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:24:14.406169    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:24:14.415306    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:24:14.415369    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:24:14.424550    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:24:14.435946    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:24:14.448076    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:24:14.457027    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:24:14.466527    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:24:14.475918    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:24:14.484801    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:24:14.494039    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:24:14.502344    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:24:14.510724    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:14.608373    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:24:14.627463    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:24:14.627552    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:24:14.644673    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:24:14.657243    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:24:14.675019    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:24:14.686098    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:24:14.697382    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:24:14.722583    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:24:14.734058    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:24:14.749179    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:24:14.752033    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:24:14.760199    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:24:14.773743    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:24:14.866897    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:24:14.972459    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:24:14.972482    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:24:14.986205    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:15.081962    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:24:17.363023    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.281026419s)
	I0917 10:24:17.363099    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:24:17.373222    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:24:17.386396    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:24:17.397093    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:24:17.488832    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:24:17.603916    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:17.712002    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:24:17.725875    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:24:17.737346    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:17.846138    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
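
Editor's note: the runtime-selection dance from 10:24:14.424 onward follows a fixed order for a docker-runtime cluster — containerd and crio are stopped if active, crictl is pointed at unix:///var/run/cri-dockerd.sock, /etc/docker/daemon.json is written to force the cgroupfs driver (matching the kubelet), and the cri-docker socket and service are unmasked, enabled, and restarted so the kubelet has a CRI endpoint in front of dockerd.
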
	I0917 10:24:17.910308    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:24:17.910400    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:24:17.914917    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:24:17.914984    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:24:17.918153    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:24:17.947145    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:24:17.947245    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:24:17.963719    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:24:18.000615    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:24:18.042227    4318 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 10:24:18.063289    4318 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 10:24:18.084167    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:18.084404    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:24:18.087640    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
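
Editor's note: the hosts-file edit above uses the same idempotent pattern as the hostname guard earlier — grep -v strips any stale host.minikube.internal line, the fresh 192.169.0.1 mapping is appended, and the result is written to a temp file and copied back with sudo, so repeated starts never accumulate duplicate entries.
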
	I0917 10:24:18.098050    4318 mustload.go:65] Loading cluster: ha-744000
	I0917 10:24:18.098230    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:18.098462    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:18.098484    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:18.107325    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51975
	I0917 10:24:18.107666    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:18.108009    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:18.108026    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:18.108255    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:18.108371    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:24:18.108467    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:18.108528    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:24:18.109600    4318 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:24:18.109898    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:18.109929    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:18.118725    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51977
	I0917 10:24:18.119073    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:18.119409    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:18.119421    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:18.119635    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:18.119739    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:24:18.119820    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.7
	I0917 10:24:18.119829    4318 certs.go:194] generating shared ca certs ...
	I0917 10:24:18.119841    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:24:18.119995    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:24:18.120047    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:24:18.120060    4318 certs.go:256] generating profile certs ...
	I0917 10:24:18.120159    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:24:18.120243    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.2fbb59ab
	I0917 10:24:18.120301    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:24:18.120308    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:24:18.120350    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:24:18.120376    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:24:18.120395    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:24:18.120412    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:24:18.120438    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:24:18.120458    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:24:18.120476    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:24:18.120563    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:24:18.120603    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:24:18.120612    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:24:18.120645    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:24:18.120678    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:24:18.120708    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:24:18.120780    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:24:18.120814    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.120834    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.120851    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.120877    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:24:18.120957    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:24:18.121043    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:24:18.121130    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:24:18.121202    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:24:18.147236    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 10:24:18.150493    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 10:24:18.158955    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 10:24:18.162129    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 10:24:18.169902    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 10:24:18.173023    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 10:24:18.181042    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 10:24:18.184431    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 10:24:18.192679    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 10:24:18.195793    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 10:24:18.203953    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 10:24:18.207044    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
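Note: the stat/scp pairs above pull the cluster-wide shared credentials (the service-account keypair, front-proxy CA, and etcd CA) off the primary control plane into memory before pushing them to the joining node, so every control-plane member signs with the same keys. A minimal sketch of the size probe, using a path taken from the log (running it outside this VM is illustrative only):

    # Print the remote file's size in bytes; minikube uses this to size the
    # in-memory copy before transferring the cert material.
    stat -c %s /var/lib/minikube/certs/sa.pub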
	I0917 10:24:18.215067    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:24:18.235596    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:24:18.255384    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:24:18.274936    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:24:18.294598    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:24:18.314207    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:24:18.333653    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:24:18.352964    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:24:18.372887    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:24:18.392444    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:24:18.412080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:24:18.431948    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 10:24:18.445500    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 10:24:18.459362    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 10:24:18.473399    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 10:24:18.487272    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 10:24:18.501703    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 10:24:18.515561    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 10:24:18.529533    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:24:18.533858    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:24:18.543223    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.546597    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.546657    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.550937    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:24:18.560220    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:24:18.569425    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.572837    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.572891    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.577272    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:24:18.586607    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:24:18.596344    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.600052    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.600113    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.604520    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
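Note: the ln -fs commands above follow OpenSSL's hashed-directory convention: the link name under /etc/ssl/certs is the certificate's subject-name hash plus a ".0" suffix, which is how the verifier locates a CA at runtime. A sketch of how the hash in b5213941.0 is derived (paths from the log):

    # openssl x509 -hash prints the subject-name hash used for the symlink name.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"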
	I0917 10:24:18.614023    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:24:18.617509    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:24:18.621851    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:24:18.626160    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:24:18.630354    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:24:18.634589    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:24:18.638973    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
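Note: openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check in this run of commands is what would trigger regeneration of the affected cert. For example, on one of the paths above:

    # Exit status 0: still valid for at least a day; non-zero: expiring/expired.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for 24h" || echo "needs regeneration"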
	I0917 10:24:18.643298    4318 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 10:24:18.643362    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
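Note: in the generated kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before the override sets the new one with the node-specific --node-ip and --hostname-override flags. To inspect the rendered unit on the node (a sketch; the "-n m03" node selector for this profile is an assumption):

    # Show the effective kubelet unit, including the 10-kubeadm.conf drop-in:
    minikube ssh -p ha-744000 -n m03 -- systemctl cat kubelet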
	I0917 10:24:18.643382    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:24:18.643427    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:24:18.656418    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:24:18.656455    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
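Note: the manifest above is written to /etc/kubernetes/manifests (see the scp a few lines below), so kubelet runs kube-vip as a static pod on each control plane. The instances elect a leader through the plndr-cp-lock Lease, and the winner advertises the VIP 192.169.0.254 on eth0 via ARP, fronting all API servers on port 8443. One way to see which node currently announces the VIP (a sketch, assuming kubectl access and that the context is named after the profile):

    # The Lease holder is the control-plane node currently announcing the VIP.
    kubectl --context ha-744000 -n kube-system get lease plndr-cp-lock \
      -o jsonpath='{.spec.holderIdentity}'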
	I0917 10:24:18.656516    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:24:18.665097    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:24:18.665163    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 10:24:18.673393    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 10:24:18.687079    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:24:18.701092    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:24:18.714815    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:24:18.717763    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
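Note: the one-liner above rewrites /etc/hosts idempotently: it filters out any existing control-plane.minikube.internal entry, appends the VIP mapping, and copies the temp file back in one shot so a partially written hosts file is never left in place. Spelled out (an equivalent sketch, not the literal command run):

    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.169.0.254\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts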
	I0917 10:24:18.727902    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:18.829461    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:24:18.842084    4318 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:24:18.842275    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:18.863032    4318 out.go:177] * Verifying Kubernetes components...
	I0917 10:24:18.883865    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:18.998710    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:24:19.010018    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:24:19.010220    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 10:24:19.010257    4318 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 10:24:19.010447    4318 node_ready.go:35] waiting up to 6m0s for node "ha-744000-m03" to be "Ready" ...
	I0917 10:24:19.010490    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.010495    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.010502    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.010506    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.012607    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.012878    4318 node_ready.go:49] node "ha-744000-m03" has status "Ready":"True"
	I0917 10:24:19.012890    4318 node_ready.go:38] duration metric: took 2.431907ms for node "ha-744000-m03" to be "Ready" ...
	I0917 10:24:19.012896    4318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
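Note: the readiness gate polls the API server first for the node object and then for every system-critical pod matching the labels listed above; the static control-plane pods carry a "component" label while the DNS and proxy pods carry "k8s-app". Roughly the same check by hand (a sketch, context name assumed from the profile):

    kubectl --context ha-744000 -n kube-system get pods \
      -l component=kube-apiserver -o wide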
	I0917 10:24:19.012942    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:19.012948    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.012953    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.012957    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.016637    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:19.021780    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.021832    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j9jcc
	I0917 10:24:19.021838    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.021845    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.021849    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.023987    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.024523    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.024531    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.024537    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.024540    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.026255    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.026592    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.026602    4318 pod_ready.go:82] duration metric: took 4.810235ms for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.026609    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.026651    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-khnlh
	I0917 10:24:19.026656    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.026661    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.026665    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.028592    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.029028    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.029035    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.029041    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.029046    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.031043    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.031318    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.031326    4318 pod_ready.go:82] duration metric: took 4.71115ms for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.031340    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.031385    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000
	I0917 10:24:19.031390    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.031395    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.031400    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.033205    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.033583    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.033590    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.033596    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.033600    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.035534    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.035980    4318 pod_ready.go:93] pod "etcd-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.035990    4318 pod_ready.go:82] duration metric: took 4.645198ms for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.035996    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.036034    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m02
	I0917 10:24:19.036039    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.036044    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.036047    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.038093    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.038513    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:19.038520    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.038526    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.038529    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.040485    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.041086    4318 pod_ready.go:93] pod "etcd-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.041096    4318 pod_ready.go:82] duration metric: took 5.095487ms for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.041103    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.210917    4318 request.go:632] Waited for 169.774559ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:24:19.210994    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:24:19.211005    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.211012    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.211017    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.219188    4318 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
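Note: the "Waited for ... due to client-side throttling" entries come from client-go's token-bucket rate limiter; with QPS and Burst left at zero in the client config shown earlier, the library defaults (5 requests/s, burst 10) apply, so the tight node/pod polling loop gets briefly queued on the client side rather than by API priority and fairness.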
	I0917 10:24:19.410612    4318 request.go:632] Waited for 190.84697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.410658    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.410668    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.410679    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.410688    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.427654    4318 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 10:24:19.428047    4318 pod_ready.go:93] pod "etcd-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.428057    4318 pod_ready.go:82] duration metric: took 386.946972ms for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.428069    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.611188    4318 request.go:632] Waited for 183.076824ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:24:19.611240    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:24:19.611249    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.611257    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.611264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.622189    4318 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0917 10:24:19.811318    4318 request.go:632] Waited for 187.797206ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.811366    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.811407    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.811419    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.811426    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.823164    4318 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 10:24:19.823509    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.823520    4318 pod_ready.go:82] duration metric: took 395.442485ms for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.823528    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.010832    4318 request.go:632] Waited for 187.259959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:24:20.010872    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:24:20.010876    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.010913    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.010919    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.016809    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:20.210576    4318 request.go:632] Waited for 193.290597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:20.210656    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:20.210663    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.210675    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.210681    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.241143    4318 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0917 10:24:20.242017    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:20.242029    4318 pod_ready.go:82] duration metric: took 418.492753ms for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.242037    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.412058    4318 request.go:632] Waited for 169.980212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.412108    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.412115    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.412119    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.412124    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.426145    4318 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 10:24:20.611816    4318 request.go:632] Waited for 184.70602ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:20.611860    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:20.611919    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.611928    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.611934    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.620369    4318 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 10:24:20.811031    4318 request.go:632] Waited for 68.064136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.811067    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.811073    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.811120    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.811130    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.814429    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:21.010914    4318 request.go:632] Waited for 195.866244ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.010969    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.010976    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.010982    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.010986    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.013773    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.243275    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:21.243312    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.243339    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.243347    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.246247    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.411834    4318 request.go:632] Waited for 165.11515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.411870    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.411880    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.411906    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.411911    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.414456    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.742665    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:21.742680    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.742687    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.742691    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.745790    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:21.812507    4318 request.go:632] Waited for 66.156229ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.812582    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.812590    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.812600    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.812608    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.820287    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:24:22.242306    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:22.242320    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.242327    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.242331    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.244398    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.244874    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:22.244882    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.244888    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.244892    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.246990    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.247323    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:22.742294    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:22.742306    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.742313    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.742316    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.744814    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.745729    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:22.745740    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.745748    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.745751    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.748226    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.242342    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:23.242353    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.242359    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.242363    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.244374    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.244841    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:23.244851    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.244856    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.244861    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.246650    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:23.742870    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:23.742914    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.742924    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.742931    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.745627    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.746052    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:23.746060    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.746065    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.746068    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.747609    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.242218    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:24.242231    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.242238    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.242242    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.244278    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:24.244830    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:24.244840    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.244846    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.244849    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.246617    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.743710    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:24.743732    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.743767    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.743774    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.746703    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:24.747074    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:24.747081    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.747086    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.747091    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.748857    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.749268    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:25.243132    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:25.243162    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.243175    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.243182    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.246637    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:25.247243    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:25.247251    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.247257    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.247261    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.248791    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:25.743144    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:25.743185    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.743194    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.743200    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.745534    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:25.746096    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:25.746104    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.746110    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.746114    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.747777    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:26.243397    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:26.243422    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.243434    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.243439    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.246724    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:26.247251    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:26.247258    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.247264    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.247267    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.248850    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:26.743796    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:26.743812    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.743818    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.743822    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.746038    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:26.746535    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:26.746543    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.746548    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.746552    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.748223    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:27.243865    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:27.243907    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.243915    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.243921    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.246152    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:27.246675    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:27.246682    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.246690    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.246694    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.248406    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:27.248807    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:27.743171    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:27.743187    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.743194    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.743198    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.745500    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:27.745988    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:27.745997    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.746002    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.746006    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.748595    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:28.242282    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:28.242301    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.242313    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.242319    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.245501    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:28.246247    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:28.246255    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.246261    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.246264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.247902    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:28.743212    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:28.743236    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.743249    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.743260    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.746405    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:28.747013    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:28.747024    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.747033    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.747036    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.748962    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:29.242696    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:29.242721    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.242759    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.242768    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.246203    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:29.246735    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:29.246743    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.246748    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.246751    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.248540    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:29.248873    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:29.742874    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:29.742909    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.742916    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.742920    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.745853    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:29.746241    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:29.746248    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.746254    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.746258    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.747886    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:30.242344    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:30.242398    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.242412    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.242417    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.245482    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:30.246231    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:30.246239    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.246243    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.246249    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.247931    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:30.743687    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:30.743739    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.743748    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.743754    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.746284    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:30.746897    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:30.746904    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.746910    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.746919    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.748657    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.242762    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:31.242802    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.242815    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.242821    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.244879    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:31.245288    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:31.245296    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.245302    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.245305    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.246940    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.744167    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:31.744190    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.744201    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.744210    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.747694    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:31.748330    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:31.748354    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.748359    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.748363    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.750021    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.750280    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:32.243257    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:32.243276    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.243287    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.243295    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.246666    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:32.247294    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:32.247301    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.247307    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.247315    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.249071    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:32.742445    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:32.742465    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.742477    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.742486    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.745063    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:32.745573    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:32.745581    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.745586    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.745590    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.747244    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.242932    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:33.242948    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.242957    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.242960    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.245698    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:33.246162    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.246170    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.246176    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.246180    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.248030    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.743607    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:33.743630    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.743677    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.743686    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.747091    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:33.747696    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.747706    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.747715    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.747721    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.749482    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.749881    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.749891    4318 pod_ready.go:82] duration metric: took 13.507764282s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.749898    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.749929    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:24:33.749934    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.749939    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.749944    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.751607    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.752009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:33.752016    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.752022    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.752026    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.753479    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.753776    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.753784    4318 pod_ready.go:82] duration metric: took 3.88171ms for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.753790    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.753823    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:24:33.753827    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.753833    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.753838    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.755454    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.755911    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:33.755918    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.755924    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.755927    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.757319    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.757679    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.757688    4318 pod_ready.go:82] duration metric: took 3.892056ms for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.757694    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.757728    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:24:33.757735    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.757741    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.757744    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.759325    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.759692    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.759699    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.759705    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.759708    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.761363    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.761694    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.761703    4318 pod_ready.go:82] duration metric: took 4.003379ms for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.761709    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.761744    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:24:33.761749    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.761754    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.761759    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.763321    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.763721    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:24:33.763727    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.763733    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.763737    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.765414    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.765712    4318 pod_ready.go:93] pod "kube-proxy-66bkb" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.765720    4318 pod_ready.go:82] duration metric: took 4.007111ms for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.765726    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.944183    4318 request.go:632] Waited for 178.404523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:24:33.944229    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:24:33.944237    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.944268    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.944273    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.946730    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.143628    4318 request.go:632] Waited for 196.302632ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:34.143662    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:34.143667    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.143673    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.143676    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.145586    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:34.145943    4318 pod_ready.go:93] pod "kube-proxy-6xd2h" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.145952    4318 pod_ready.go:82] duration metric: took 380.218476ms for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
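The "Waited ... due to client-side throttling" messages above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side API priority and fairness, which is exactly what the log text says. A minimal sketch of building a clientset with a looser limiter, assuming a kubeconfig path; the QPS/Burst values are illustrative, not what minikube actually configures:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient returns a clientset whose client-side rate limiter is loosened
// so back-to-back GETs are not paced into ~200ms waits like those logged above.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	return kubernetes.NewForConfig(cfg)
}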
	I0917 10:24:34.145958    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.343736    4318 request.go:632] Waited for 197.699564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:24:34.343783    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:24:34.343789    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.343820    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.343834    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.346285    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.544565    4318 request.go:632] Waited for 197.654167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:34.544605    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:34.544613    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.544621    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.544627    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.547228    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.547536    4318 pod_ready.go:93] pod "kube-proxy-c5xbc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.547544    4318 pod_ready.go:82] duration metric: took 401.579042ms for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.547551    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.745694    4318 request.go:632] Waited for 198.04491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:24:34.745741    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:24:34.745751    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.745761    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.745768    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.749007    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:34.944446    4318 request.go:632] Waited for 194.709353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:34.944508    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:34.944519    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.944530    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.944538    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.948023    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:34.948529    4318 pod_ready.go:93] pod "kube-proxy-k9xsp" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.948539    4318 pod_ready.go:82] duration metric: took 400.98043ms for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.948546    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.144352    4318 request.go:632] Waited for 195.670277ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:24:35.144418    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:24:35.144427    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.144435    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.144444    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.148047    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:35.345672    4318 request.go:632] Waited for 197.054602ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:35.345814    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:35.345826    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.345837    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.345847    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.350008    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:24:35.350440    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:35.350449    4318 pod_ready.go:82] duration metric: took 401.89555ms for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.350455    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.545736    4318 request.go:632] Waited for 195.218553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:24:35.545818    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:24:35.545826    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.545834    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.545838    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.548444    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:35.743956    4318 request.go:632] Waited for 195.068268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:35.744009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:35.744018    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.744069    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.744076    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.747579    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:35.748084    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:35.748097    4318 pod_ready.go:82] duration metric: took 397.633311ms for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.748105    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.943849    4318 request.go:632] Waited for 195.677443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:35.943994    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:35.944005    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.944016    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.944023    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.947546    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.144032    4318 request.go:632] Waited for 195.696928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.144124    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.144136    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.144152    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.144160    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.147113    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:36.344824    4318 request.go:632] Waited for 96.483405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.344983    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.344994    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.345004    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.345015    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.348529    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.544910    4318 request.go:632] Waited for 195.649777ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.545008    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.545020    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.545031    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.545037    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.548104    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.748291    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.748355    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.748369    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.748376    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.751622    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.945151    4318 request.go:632] Waited for 192.867405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.945191    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.945197    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.945223    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.945245    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.948349    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.249285    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:37.249335    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.249350    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.249356    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.252559    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.344915    4318 request.go:632] Waited for 91.666148ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:37.345009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:37.345019    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.345029    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.345039    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.348586    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.348906    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:37.348918    4318 pod_ready.go:82] duration metric: took 1.600795502s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:37.348928    4318 pod_ready.go:39] duration metric: took 18.335907637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
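The pod_ready lines above record a poll loop: GET the pod, check its Ready condition, GET its node, and repeat every ~500ms until the condition is True or the 6m0s budget runs out. A minimal sketch of the pod half of that loop with client-go, assuming a configured clientset; the function name and intervals are illustrative:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the API server until the named pod reports Ready=True,
// mirroring the repeated GET /api/v1/namespaces/<ns>/pods/<name> calls logged.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}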
	I0917 10:24:37.348941    4318 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:24:37.349014    4318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:24:37.361991    4318 api_server.go:72] duration metric: took 18.519766947s to wait for apiserver process to appear ...
	I0917 10:24:37.362004    4318 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:24:37.362016    4318 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 10:24:37.365142    4318 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 10:24:37.365173    4318 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 10:24:37.365178    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.365184    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.365188    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.365770    4318 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 10:24:37.365800    4318 api_server.go:141] control plane version: v1.31.1
	I0917 10:24:37.365807    4318 api_server.go:131] duration metric: took 3.798093ms to wait for apiserver health ...
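The healthz and version probes above are two small GETs against the API server; with client-go they can be reproduced through the discovery client. A sketch under the same clientset assumption as the earlier snippet; the error wording is illustrative:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// apiServerHealthy checks GET /healthz for the literal body "ok" and then
// reads GET /version, as the log does (control plane v1.31.1 in this run).
func apiServerHealthy(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return "", err
	}
	if string(body) != "ok" {
		return "", fmt.Errorf("healthz returned %q", body)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return v.GitVersion, nil
}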
	I0917 10:24:37.365812    4318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 10:24:37.544057    4318 request.go:632] Waited for 178.188238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.544191    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.544207    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.544224    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.544234    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.549291    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:37.554725    4318 system_pods.go:59] 26 kube-system pods found
	I0917 10:24:37.554740    4318 system_pods.go:61] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.554746    4318 system_pods.go:61] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.554752    4318 system_pods.go:61] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:24:37.554756    4318 system_pods.go:61] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running
	I0917 10:24:37.554759    4318 system_pods.go:61] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:24:37.554761    4318 system_pods.go:61] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:24:37.554764    4318 system_pods.go:61] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:24:37.554769    4318 system_pods.go:61] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running
	I0917 10:24:37.554772    4318 system_pods.go:61] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:24:37.554774    4318 system_pods.go:61] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:24:37.554778    4318 system_pods.go:61] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running
	I0917 10:24:37.554781    4318 system_pods.go:61] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:24:37.554784    4318 system_pods.go:61] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:24:37.554787    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running
	I0917 10:24:37.554791    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:24:37.554794    4318 system_pods.go:61] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:24:37.554797    4318 system_pods.go:61] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:24:37.554800    4318 system_pods.go:61] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:24:37.554802    4318 system_pods.go:61] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running
	I0917 10:24:37.554805    4318 system_pods.go:61] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:24:37.554808    4318 system_pods.go:61] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running
	I0917 10:24:37.554811    4318 system_pods.go:61] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:24:37.554813    4318 system_pods.go:61] "kube-vip-ha-744000" [bcb8c990-8b77-4e1d-bf96-614e9da8bf60] Running
	I0917 10:24:37.554816    4318 system_pods.go:61] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:24:37.554818    4318 system_pods.go:61] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:24:37.554821    4318 system_pods.go:61] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:24:37.554825    4318 system_pods.go:74] duration metric: took 189.008209ms to wait for pod list to return data ...
	I0917 10:24:37.554830    4318 default_sa.go:34] waiting for default service account to be created ...
	I0917 10:24:37.744848    4318 request.go:632] Waited for 189.951036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:24:37.744937    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:24:37.744950    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.744962    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.744968    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.748818    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.748898    4318 default_sa.go:45] found service account: "default"
	I0917 10:24:37.748910    4318 default_sa.go:55] duration metric: took 194.07297ms for default service account to be created ...
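The default_sa wait above lists service accounts in the "default" namespace until one named "default" appears. A minimal sketch of the same check; the timeout and interval are illustrative:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA polls until the "default" ServiceAccount exists, as the
// default_sa.go lines above do before declaring it found.
func waitDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil
			}
			for _, sa := range sas.Items {
				if sa.Name == "default" {
					return true, nil
				}
			}
			return false, nil
		})
}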
	I0917 10:24:37.748917    4318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 10:24:37.945360    4318 request.go:632] Waited for 196.381657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.945493    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.945504    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.945515    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.945524    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.951048    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:37.956873    4318 system_pods.go:86] 26 kube-system pods found
	I0917 10:24:37.956886    4318 system_pods.go:89] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.956893    4318 system_pods.go:89] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.956898    4318 system_pods.go:89] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:24:37.956901    4318 system_pods.go:89] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running
	I0917 10:24:37.956905    4318 system_pods.go:89] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:24:37.956908    4318 system_pods.go:89] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:24:37.956910    4318 system_pods.go:89] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:24:37.956915    4318 system_pods.go:89] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running
	I0917 10:24:37.956918    4318 system_pods.go:89] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:24:37.956921    4318 system_pods.go:89] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:24:37.956927    4318 system_pods.go:89] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running
	I0917 10:24:37.956931    4318 system_pods.go:89] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:24:37.956933    4318 system_pods.go:89] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:24:37.956939    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running
	I0917 10:24:37.956943    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:24:37.956945    4318 system_pods.go:89] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:24:37.956948    4318 system_pods.go:89] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:24:37.956951    4318 system_pods.go:89] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:24:37.956954    4318 system_pods.go:89] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running
	I0917 10:24:37.956957    4318 system_pods.go:89] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:24:37.956960    4318 system_pods.go:89] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running
	I0917 10:24:37.956962    4318 system_pods.go:89] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:24:37.956966    4318 system_pods.go:89] "kube-vip-ha-744000" [bcb8c990-8b77-4e1d-bf96-614e9da8bf60] Running
	I0917 10:24:37.956968    4318 system_pods.go:89] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:24:37.956972    4318 system_pods.go:89] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:24:37.956975    4318 system_pods.go:89] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:24:37.956980    4318 system_pods.go:126] duration metric: took 208.057925ms to wait for k8s-apps to be running ...
	I0917 10:24:37.956985    4318 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 10:24:37.957044    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:24:37.968066    4318 system_svc.go:56] duration metric: took 11.076755ms WaitForService to wait for kubelet
	I0917 10:24:37.968081    4318 kubeadm.go:582] duration metric: took 19.125854064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:24:37.968093    4318 node_conditions.go:102] verifying NodePressure condition ...
	I0917 10:24:38.144749    4318 request.go:632] Waited for 176.615288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 10:24:38.144801    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 10:24:38.144806    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:38.144812    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:38.144819    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:38.147413    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:38.148237    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148247    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148254    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148257    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148261    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148265    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148268    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148271    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148274    4318 node_conditions.go:105] duration metric: took 180.176513ms to run NodePressure ...
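The node_conditions lines above print, per node, the ephemeral-storage and CPU capacity read from a single GET /api/v1/nodes. A small sketch of that read, under the same clientset assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities lists all nodes and prints the two capacity fields the
// log reports (17734596Ki ephemeral storage and 2 CPUs per node in this run).
func printNodeCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
	return nil
}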
	I0917 10:24:38.148284    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:24:38.148299    4318 start.go:255] writing updated cluster config ...
	I0917 10:24:38.170792    4318 out.go:201] 
	I0917 10:24:38.192139    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:38.192258    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.214598    4318 out.go:177] * Starting "ha-744000-m04" worker node in "ha-744000" cluster
	I0917 10:24:38.256637    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:24:38.256664    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:24:38.256839    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:24:38.256857    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:24:38.256981    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.257985    4318 start.go:360] acquireMachinesLock for ha-744000-m04: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:24:38.258078    4318 start.go:364] duration metric: took 72.145µs to acquireMachinesLock for "ha-744000-m04"
	I0917 10:24:38.258103    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:24:38.258112    4318 fix.go:54] fixHost starting: m04
	I0917 10:24:38.258540    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:38.258566    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:38.268106    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51981
	I0917 10:24:38.268448    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:38.268812    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:38.268827    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:38.269077    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:38.269188    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:24:38.269289    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetState
	I0917 10:24:38.269369    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.269469    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 3930
	I0917 10:24:38.270534    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid 3930 missing from process table
	I0917 10:24:38.270552    4318 fix.go:112] recreateIfNeeded on ha-744000-m04: state=Stopped err=<nil>
	I0917 10:24:38.270560    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	W0917 10:24:38.270638    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:24:38.291868    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m04" ...
	I0917 10:24:38.333636    4318 main.go:141] libmachine: (ha-744000-m04) Calling .Start
	I0917 10:24:38.333893    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.333997    4318 main.go:141] libmachine: (ha-744000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid
	I0917 10:24:38.334050    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Using UUID a75a0481-aaf0-49d3-9d6e-de3c56706456
	I0917 10:24:38.361417    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Generated MAC b6:cf:5d:a2:4f:b0
	I0917 10:24:38.361439    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:24:38.361574    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75a0481-aaf0-49d3-9d6e-de3c56706456", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f6270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:24:38.361608    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75a0481-aaf0-49d3-9d6e-de3c56706456", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f6270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:24:38.361683    4318 main.go:141] (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a75a0481-aaf0-49d3-9d6e-de3c56706456", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/ha-744000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:24:38.361733    4318 main.go:141] (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a75a0481-aaf0-49d3-9d6e-de3c56706456 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/ha-744000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:24:38.361747    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:24:38.363077    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Pid is 4356
	I0917 10:24:38.363455    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Attempt 0
	I0917 10:24:38.363472    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.363519    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 4356
	I0917 10:24:38.365806    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Searching for b6:cf:5d:a2:4f:b0 in /var/db/dhcpd_leases ...
	I0917 10:24:38.365879    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:24:38.365922    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0cb7}
	I0917 10:24:38.365937    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:24:38.365950    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:24:38.365959    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:24:38.365986    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Found match: b6:cf:5d:a2:4f:b0
	I0917 10:24:38.365994    4318 main.go:141] libmachine: (ha-744000-m04) DBG | IP: 192.169.0.8
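The driver resolves the VM's IP by matching its generated MAC against entries in macOS's /var/db/dhcpd_leases, as the DBG lines above show. A minimal sketch of that lookup; it assumes the plain key=value entry layout of that file (name, ip_address, hw_address per entry), which is simplified relative to the real driver's parser:

package main

import (
	"bufio"
	"os"
	"strings"
)

// leaseIPForMAC scans the lease file and returns the ip_address of the entry
// whose hw_address contains mac (e.g. "b6:cf:5d:a2:4f:b0"), or "" if absent.
func leaseIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, nil
		}
	}
	return "", sc.Err()
}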
	I0917 10:24:38.366035    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetConfigRaw
	I0917 10:24:38.366790    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:24:38.367002    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.367474    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:24:38.367487    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:24:38.367618    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:24:38.367733    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:24:38.367825    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:24:38.367932    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:24:38.368026    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:24:38.368135    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:38.368308    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:24:38.368315    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:24:38.371140    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:24:38.380744    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:24:38.381595    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:24:38.381618    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:24:38.381626    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:24:38.381634    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:24:38.766023    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:24:38.766038    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:24:38.880838    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:24:38.880856    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:24:38.880875    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:24:38.880896    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:24:38.881691    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:24:38.881699    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:24:44.498444    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:24:44.498459    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:24:44.498494    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:24:44.523076    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:25:13.428240    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:25:13.428258    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.428409    4318 buildroot.go:166] provisioning hostname "ha-744000-m04"
	I0917 10:25:13.428420    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.428514    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.428620    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.428723    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.428810    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.428889    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.429066    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.429209    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.429217    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m04 && echo "ha-744000-m04" | sudo tee /etc/hostname
	I0917 10:25:13.489074    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m04
	
	I0917 10:25:13.489089    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.489213    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.489306    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.489396    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.489496    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.489633    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.489780    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.489791    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:25:13.545140    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
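Each provisioning step above (setting the hostname, rewriting /etc/hosts, and the earlier systemctl check) is one command run over SSH against the guest at 192.169.0.8:22. A minimal sketch with golang.org/x/crypto/ssh, assuming the machine's private-key path; the InsecureIgnoreHostKey callback is only defensible for throwaway test VMs like these:

package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the guest, opens one session, and returns the command's
// combined stdout/stderr, much as minikube's ssh_runner does per step.
func runSSH(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.CombinedOutput(cmd)
}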
	I0917 10:25:13.545156    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:25:13.545164    4318 buildroot.go:174] setting up certificates
	I0917 10:25:13.545177    4318 provision.go:84] configureAuth start
	I0917 10:25:13.545184    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.545313    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:25:13.545408    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.545491    4318 provision.go:143] copyHostCerts
	I0917 10:25:13.545519    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:25:13.545566    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:25:13.545572    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:25:13.545709    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:25:13.545914    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:25:13.545947    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:25:13.545952    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:25:13.546020    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:25:13.546170    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:25:13.546203    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:25:13.546208    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:25:13.546273    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:25:13.546422    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m04 san=[127.0.0.1 192.169.0.8 ha-744000-m04 localhost minikube]
	I0917 10:25:13.728947    4318 provision.go:177] copyRemoteCerts
	I0917 10:25:13.729001    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:25:13.729019    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.729159    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.729267    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.729352    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.729436    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:13.760341    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:25:13.760415    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:25:13.780212    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:25:13.780295    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:25:13.799969    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:25:13.800048    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:25:13.820126    4318 provision.go:87] duration metric: took 274.938832ms to configureAuth
	I0917 10:25:13.820140    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:25:13.820316    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:25:13.820363    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:13.820492    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.820577    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.820675    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.820756    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.820822    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.820952    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.821086    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.821093    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:25:13.869340    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:25:13.869359    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:25:13.869441    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:25:13.869457    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.869595    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.869683    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.869771    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.869861    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.870006    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.870149    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.870194    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:25:13.929484    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:25:13.929501    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.929632    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.929718    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.929806    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.929887    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.930023    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.930160    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.930175    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:25:15.508327    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
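	(The diff-or-replace one-liner above is a change-detection install: diff -u exits 0 when the staged unit matches the installed one, so the mv / daemon-reload / enable / restart group runs only when the file changed or, as the "can't stat" output shows here, does not exist yet. A sketch of the same pattern for an arbitrary unit, with placeholder paths:
	
		# Install a systemd unit only when the staged copy differs.
		UNIT=/lib/systemd/system/example.service   # placeholder path
		sudo diff -u "$UNIT" "$UNIT.new" || {
		  sudo mv "$UNIT.new" "$UNIT"
		  sudo systemctl daemon-reload
		  sudo systemctl enable "$(basename "$UNIT")"
		  sudo systemctl restart "$(basename "$UNIT")"
		}
	)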
	
	I0917 10:25:15.508343    4318 machine.go:96] duration metric: took 37.140625742s to provisionDockerMachine
	I0917 10:25:15.508350    4318 start.go:293] postStartSetup for "ha-744000-m04" (driver="hyperkit")
	I0917 10:25:15.508359    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:25:15.508370    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.508567    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:25:15.508581    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.508684    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.508771    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.508863    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.508959    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.539960    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:25:15.543053    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:25:15.543063    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:25:15.543160    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:25:15.543298    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:25:15.543305    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:25:15.543461    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:25:15.551517    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:25:15.570767    4318 start.go:296] duration metric: took 62.406299ms for postStartSetup
	I0917 10:25:15.570789    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.570981    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:25:15.570995    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.571091    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.571171    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.571256    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.571333    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.602758    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:25:15.602836    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:25:15.637575    4318 fix.go:56] duration metric: took 37.37922575s for fixHost
	I0917 10:25:15.637622    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.637768    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.637924    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.638031    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.638176    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.638325    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:15.638471    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:15.638479    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:25:15.688928    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593915.722853111
	
	I0917 10:25:15.688940    4318 fix.go:216] guest clock: 1726593915.722853111
	I0917 10:25:15.688945    4318 fix.go:229] Guest: 2024-09-17 10:25:15.722853111 -0700 PDT Remote: 2024-09-17 10:25:15.63759 -0700 PDT m=+131.293327303 (delta=85.263111ms)
	I0917 10:25:15.688955    4318 fix.go:200] guest clock delta is within tolerance: 85.263111ms
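	(As a check on the reported delta: the guest clock reads 1726593915.722853111 s and the remote side 1726593915.637590 s, and 0.722853111 − 0.637590 = 0.085263111 s, i.e. exactly the 85.263111ms shown, which is why fix.go accepts the guest clock here without resetting it.)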
	I0917 10:25:15.688959    4318 start.go:83] releasing machines lock for "ha-744000-m04", held for 37.430633857s
	I0917 10:25:15.688978    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.689103    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:25:15.710671    4318 out.go:177] * Found network options:
	I0917 10:25:15.731491    4318 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 10:25:15.753310    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.753333    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.753342    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:25:15.753356    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.753871    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.754022    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.754119    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:25:15.754146    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	W0917 10:25:15.754178    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.754208    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.754223    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:25:15.754296    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.754303    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:25:15.754334    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.754432    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.754453    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.754575    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.754604    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.754689    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.754711    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.754792    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	W0917 10:25:15.782647    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:25:15.782713    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:25:15.824742    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:25:15.824761    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:25:15.824849    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:25:15.840222    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:25:15.849242    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:25:15.858317    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:25:15.858387    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:25:15.867462    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:25:15.875738    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:25:15.884682    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:25:15.893510    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:25:15.902446    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:25:15.911295    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:25:15.919994    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:25:15.928900    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:25:15.936904    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:25:15.944894    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:25:16.041231    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:25:16.060721    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:25:16.060799    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:25:16.080747    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:25:16.095004    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:25:16.114244    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:25:16.125786    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:25:16.137258    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:25:16.158423    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:25:16.170393    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:25:16.185414    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:25:16.188334    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:25:16.196827    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:25:16.210659    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:25:16.305554    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:25:16.409957    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:25:16.409982    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:25:16.425083    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:25:16.535715    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:26:17.562416    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.026297453s)
	I0917 10:26:17.562497    4318 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:26:17.630222    4318 out.go:201] 
	W0917 10:26:17.651239    4318 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:25:13 ha-744000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.456528847Z" level=info msg="Starting up"
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.457229245Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.457756278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=515
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.475582216Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490758453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490898800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490976043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491011334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491152047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491195568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491328519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491366944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491397636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491431172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491542048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491732624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493310341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493359335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493488280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493534970Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493652714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493714896Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494789743Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494871313Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494917161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494950579Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494983897Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495053063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495291226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495375682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495419457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495464742Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495500431Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495531945Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495563543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495597416Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495628537Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495658774Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495687956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495720478Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495838245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495897691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495950377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495999910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496037282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496068360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496098684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496129402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496180048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496224888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496258746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496292925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496328738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496361060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496398155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496429539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496458278Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496532105Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496577809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496631209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496668767Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496701760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496732507Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496764331Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496955260Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497045520Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497161388Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497218646Z" level=info msg="containerd successfully booted in 0.022496s"
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.478225250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.497615871Z" level=info msg="Loading containers: start."
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.589404703Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.466302251Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.511791263Z" level=info msg="Loading containers: done."
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.521663721Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.521829028Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.541037196Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:25:15 ha-744000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.542461858Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:25:16 ha-744000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.587552960Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588424393Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588788736Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588860910Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588877844Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:25:17 ha-744000-m04 dockerd[1095]: time="2024-09-17T17:25:17.626813653Z" level=info msg="Starting up"
	Sep 17 17:26:17 ha-744000-m04 dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
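	(The proximate failure is the second dockerd start, pid 1095, timing out while dialing /run/containerd/containerd.sock: unlike the first start, which launched its own managed containerd under /var/run/docker/containerd/, this one waits on the system containerd restarted just beforehand, and the socket apparently never became ready within the 60s window before "context deadline exceeded". If reproducing by hand, the natural next checks would be the following suggested commands, not from the log:
	
		sudo systemctl status containerd --no-pager      # did system containerd come back up?
		sudo journalctl -u containerd --no-pager | tail -n 50
		ls -l /run/containerd/containerd.sock            # does the socket dockerd dials exist?
	)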
	W0917 10:26:17.651325    4318 out.go:270] * 
	W0917 10:26:17.652544    4318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:26:17.714012    4318 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-744000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-744000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 logs -n 25
E0917 10:26:19.974311    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 logs -n 25: (3.397104713s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m03_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m04 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp testdata/cp-test.txt                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000 sudo cat                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m03 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-744000 node stop m02 -v=7                                                                                                 | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-744000 node start m02 -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:22 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000 -v=7                                                                                                       | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-744000 -v=7                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT | 17 Sep 24 10:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:23 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:23:04
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:23:04.382852    4318 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:23:04.383033    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:23:04.383038    4318 out.go:358] Setting ErrFile to fd 2...
	I0917 10:23:04.383042    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:23:04.383233    4318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:23:04.384637    4318 out.go:352] Setting JSON to false
	I0917 10:23:04.410020    4318 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3151,"bootTime":1726590633,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:23:04.410173    4318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:23:04.431516    4318 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:23:04.474507    4318 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:23:04.474563    4318 notify.go:220] Checking for updates...
	I0917 10:23:04.517356    4318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:04.538348    4318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:23:04.559339    4318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:23:04.580471    4318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:23:04.622325    4318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:23:04.644148    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:04.644323    4318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:23:04.645084    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.645147    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:04.654766    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I0917 10:23:04.655119    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:04.655514    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:04.655526    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:04.655751    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:04.655871    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.684288    4318 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:23:04.726365    4318 start.go:297] selected driver: hyperkit
	I0917 10:23:04.726395    4318 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:04.726649    4318 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:23:04.726838    4318 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:23:04.727063    4318 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:23:04.736820    4318 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:23:04.742830    4318 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.742852    4318 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:23:04.746401    4318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:23:04.746441    4318 cni.go:84] Creating CNI manager for ""
	I0917 10:23:04.746483    4318 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:23:04.746565    4318 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:04.746687    4318 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:23:04.789252    4318 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:23:04.810326    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:04.810440    4318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:23:04.810514    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:04.810708    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:04.810727    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:04.810905    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:04.811872    4318 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:04.811982    4318 start.go:364] duration metric: took 85.186µs to acquireMachinesLock for "ha-744000"
	I0917 10:23:04.812017    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:04.812036    4318 fix.go:54] fixHost starting: 
	I0917 10:23:04.812477    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.812504    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:04.821489    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51899
	I0917 10:23:04.821836    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:04.822180    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:04.822195    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:04.822406    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:04.822525    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.822647    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:23:04.822729    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.822838    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 3812
	I0917 10:23:04.823848    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 3812 missing from process table
	I0917 10:23:04.823907    4318 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:23:04.823932    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:23:04.824033    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:04.845116    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:23:04.866254    4318 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:23:04.866533    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.866553    4318 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:23:04.868308    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 3812 missing from process table
	I0917 10:23:04.868320    4318 main.go:141] libmachine: (ha-744000) DBG | pid 3812 is in state "Stopped"
	I0917 10:23:04.868338    4318 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
	I0917 10:23:04.868639    4318 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:23:04.980045    4318 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:23:04.980073    4318 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:04.980180    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfce0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:04.980209    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfce0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:04.980265    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:04.980311    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:04.980327    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:04.981797    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Pid is 4331
	I0917 10:23:04.982233    4318 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:23:04.982246    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.982323    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:23:04.983974    4318 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:23:04.984040    4318 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:04.984071    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:04.984087    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c3c}
	I0917 10:23:04.984115    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0ba8}
	I0917 10:23:04.984133    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0b36}
	I0917 10:23:04.984146    4318 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:23:04.984156    4318 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
	I0917 10:23:04.984188    4318 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:23:04.984817    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:04.984996    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:04.985438    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:04.985457    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.985603    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:04.985698    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:04.985789    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:04.985886    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:04.985975    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:04.986095    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:04.986288    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:04.986295    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:04.989700    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:05.044525    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:05.045631    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:05.045647    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:05.045654    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:05.045662    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:05.426657    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:05.426678    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:05.541316    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:05.541359    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:05.541371    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:05.541450    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:05.542317    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:05.542326    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:23:11.152568    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:23:11.152612    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:23:11.152621    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:23:11.176948    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:23:14.298215    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0917 10:23:17.357957    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:23:17.357984    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.358136    4318 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:23:17.358148    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.358261    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.358357    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.358444    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.358547    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.358661    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.358802    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.358948    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.358957    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:23:17.423407    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:23:17.423427    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.423563    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.423676    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.423778    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.423878    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.424023    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.424163    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.424174    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:23:17.486445    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:23:17.486467    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:23:17.486482    4318 buildroot.go:174] setting up certificates
	I0917 10:23:17.486490    4318 provision.go:84] configureAuth start
	I0917 10:23:17.486499    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.486623    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:17.486725    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.486807    4318 provision.go:143] copyHostCerts
	I0917 10:23:17.486836    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:17.486889    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:23:17.486897    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:17.487028    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:23:17.487256    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:17.487285    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:23:17.487290    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:17.487357    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:23:17.487493    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:17.487527    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:23:17.487531    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:17.487595    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:23:17.487731    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
	I0917 10:23:17.613185    4318 provision.go:177] copyRemoteCerts
	I0917 10:23:17.613267    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:23:17.613292    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.613443    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.613545    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.613632    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.613733    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:17.649429    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:23:17.649501    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:23:17.668769    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:23:17.668834    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:23:17.688500    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:23:17.688567    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:23:17.707535    4318 provision.go:87] duration metric: took 221.030078ms to configureAuth
	I0917 10:23:17.707546    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:23:17.707708    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:17.707721    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:17.707852    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.707942    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.708031    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.708110    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.708196    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.708323    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.708452    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.708459    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:23:17.762984    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:23:17.762996    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:23:17.763071    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:23:17.763083    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.763221    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.763321    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.763414    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.763501    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.763654    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.763786    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.763831    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:23:17.831028    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:23:17.831050    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.831198    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.831285    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.831382    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.831474    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.831619    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.831766    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.831778    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:23:19.502053    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:23:19.502067    4318 machine.go:96] duration metric: took 14.516529187s to provisionDockerMachine
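	The two steps above are the core of the provisioning pattern: render the full unit to docker.service.new, then swap it in and restart the daemon only when it differs from what is already installed. A minimal sketch of the same idempotent swap, assuming the paths used in this run:
	
	  # Swap the unit in only if the rendered file differs from the installed one.
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	    || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	         sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }
	  # Because the drop-in clears ExecStart= before redefining it, systemd
	  # should report exactly one effective command afterwards:
	  systemctl show -p ExecStart docker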
	I0917 10:23:19.502080    4318 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:23:19.502098    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:23:19.502109    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.502292    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:23:19.502308    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.502398    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.502495    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.502582    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.502683    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.538092    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:23:19.544386    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:23:19.544403    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:23:19.544498    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:23:19.544649    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:23:19.544655    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:23:19.544826    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:23:19.556994    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:19.591561    4318 start.go:296] duration metric: took 89.471125ms for postStartSetup
	I0917 10:23:19.591589    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.591778    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:23:19.591792    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.591890    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.591986    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.592094    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.592189    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.628129    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:23:19.628204    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:23:19.683042    4318 fix.go:56] duration metric: took 14.870917903s for fixHost
	I0917 10:23:19.683065    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.683198    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.683290    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.683390    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.683480    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.683627    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:19.683766    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:19.683773    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:23:19.738877    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593799.774557135
	
	I0917 10:23:19.738891    4318 fix.go:216] guest clock: 1726593799.774557135
	I0917 10:23:19.738896    4318 fix.go:229] Guest: 2024-09-17 10:23:19.774557135 -0700 PDT Remote: 2024-09-17 10:23:19.683055 -0700 PDT m=+15.339523666 (delta=91.502135ms)
	I0917 10:23:19.738917    4318 fix.go:200] guest clock delta is within tolerance: 91.502135ms
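	The delta above comes from comparing the guest's "date +%s.%N" output against the host clock. A rough sketch of the same measurement from the macOS side, assuming SSH access to the guest (key path illustrative; BSD date lacks %N, so the host side is second-resolution):
	
	  KEY=~/.minikube/machines/ha-744000/id_rsa                    # illustrative path
	  host_ts=$(date +%s)                                          # BSD date on the macOS host
	  guest_ts=$(ssh -i "$KEY" docker@192.169.0.5 'date +%s.%N')   # GNU date in the guest
	  awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN{printf "delta=%.3fs\n", g-h}'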
	I0917 10:23:19.738921    4318 start.go:83] releasing machines lock for "ha-744000", held for 14.926834615s
	I0917 10:23:19.738935    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739067    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:19.739167    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739471    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739568    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739641    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:23:19.739673    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.739721    4318 ssh_runner.go:195] Run: cat /version.json
	I0917 10:23:19.739736    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.739766    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.739840    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.739856    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.739947    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.739962    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.740048    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.740062    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.740142    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.774171    4318 ssh_runner.go:195] Run: systemctl --version
	I0917 10:23:19.817235    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:23:19.822623    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:23:19.822678    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:23:19.837890    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:23:19.837904    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:19.838006    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:19.853023    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:23:19.862093    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:23:19.871068    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:23:19.871113    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:23:19.879912    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:19.888688    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:23:19.897529    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:19.906364    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:23:19.915519    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:23:19.924345    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:23:19.933204    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:23:19.942066    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:23:19.950115    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:23:19.958120    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:20.050394    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
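	Condensed, the sed passes above pin containerd to the cgroupfs driver and a known CNI conf dir before the restart. The essential edits, as run in this log:
	
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	  sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd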
	I0917 10:23:20.067714    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:20.067803    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:23:20.081564    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:20.097350    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:23:20.111548    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:20.122410    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:20.132513    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:23:20.154104    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:20.164678    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:20.179449    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:23:20.182399    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:23:20.189403    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:23:20.202719    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:23:20.301120    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:23:20.410774    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:23:20.410853    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:23:20.425592    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:20.533399    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:23:22.845501    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31206782s)
	I0917 10:23:22.845569    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:23:22.857323    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:23:22.872057    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:22.882229    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:23:22.972546    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:23:23.076325    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.190977    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:23:23.204628    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:23.215649    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.315122    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:23:23.379549    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:23:23.379639    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:23:23.384126    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:23:23.384195    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:23:23.387269    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:23:23.412842    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
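	That version probe goes through the endpoint written to /etc/crictl.yaml. The same check can be made explicit without the config file; a sketch using the socket from this run:
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version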
	I0917 10:23:23.412931    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:23.429633    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:23.488622    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:23:23.488658    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:23.488993    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:23:23.492752    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
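	The one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the current mapping, and copy the temp file over in a single sudo step. Spelled out, under the same assumptions:
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    printf '192.169.0.1\thost.minikube.internal\n'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts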
	I0917 10:23:23.502567    4318 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:23:23.502656    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:23.502726    4318 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:23:23.518379    4318 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:23:23.518391    4318 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:23:23.518479    4318 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:23:23.534156    4318 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:23:23.534175    4318 cache_images.go:84] Images are preloaded, skipping loading
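	The preload check is just a set comparison between the images docker reports and the list minikube expects. A sketch of the same comparison, with expected-images.txt standing in (hypothetically) for the expected list:
	
	  docker images --format '{{.Repository}}:{{.Tag}}' | sort > /tmp/have.txt
	  sort expected-images.txt | comm -23 - /tmp/have.txt   # prints any image still missing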
	I0917 10:23:23.534195    4318 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:23:23.534287    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
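	The [Unit]/[Service] fragment above is written as a kubelet drop-in, so the node-specific flags (hostname-override, node-ip) layer on top of the stock unit. Once installed, the merged result can be inspected on the guest; a sketch:
	
	  systemctl cat kubelet                  # base unit plus 10-kubeadm.conf drop-in
	  systemctl show -p ExecStart kubelet    # the single effective command line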
	I0917 10:23:23.534379    4318 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:23:23.569331    4318 cni.go:84] Creating CNI manager for ""
	I0917 10:23:23.569343    4318 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:23:23.569361    4318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:23:23.569378    4318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:23:23.569456    4318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 10:23:23.569470    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:23:23.569527    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:23:23.582869    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:23:23.582932    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
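	kube-vip's control-plane load balancing rides on IPVS, which is why the modprobe step above loads ip_vs and its schedulers before the static pod manifest is written. A quick sketch to confirm the modules are present in the guest:
	
	  lsmod | grep -E '^(ip_vs|nf_conntrack)'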
	I0917 10:23:23.582986    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:23:23.591650    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:23:23.591706    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:23:23.600248    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:23:23.613597    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:23:23.626900    4318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:23:23.640890    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
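	With kubeadm.yaml.new and the static pod manifest in place, the rendered config can be sanity-checked before the restart logic compares it against the running cluster. A hedged sketch, assuming kubeadm's validate subcommand (available in recent releases) and the binary path from this run:
	
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new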
	I0917 10:23:23.654403    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:23:23.657129    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:23.666988    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.767317    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:23.779290    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:23:23.779301    4318 certs.go:194] generating shared ca certs ...
	I0917 10:23:23.779311    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.779465    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:23:23.779530    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:23:23.779541    4318 certs.go:256] generating profile certs ...
	I0917 10:23:23.779629    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:23:23.779650    4318 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17
	I0917 10:23:23.779666    4318 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 10:23:23.841071    4318 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 ...
	I0917 10:23:23.841087    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17: {Name:mkab82f9fd921972a929c6516cc39a0a941fac49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.841637    4318 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17 ...
	I0917 10:23:23.841647    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17: {Name:mke24af4c0eaf07f776b7fe40f78c9c251937399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.841917    4318 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:23:23.842125    4318 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:23:23.842361    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:23:23.842370    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:23:23.842393    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:23:23.842415    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:23:23.842434    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:23:23.842453    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:23:23.842471    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:23:23.842488    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:23:23.842505    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:23:23.842587    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:23:23.842622    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:23:23.842630    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:23:23.842662    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:23:23.842691    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:23:23.842724    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:23:23.842794    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:23.842828    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:23:23.842858    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:23.842876    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:23:23.843373    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:23:23.870080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:23:23.894949    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:23:23.914532    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:23:23.943260    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:23:23.966311    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:23:23.996612    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:23:24.032495    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:23:24.071443    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:23:24.109203    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:23:24.145982    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:23:24.196620    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:23:24.212031    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:23:24.216442    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:23:24.225794    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.229210    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.229255    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.233534    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:23:24.242685    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:23:24.251758    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.255864    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.255908    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.260126    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:23:24.269138    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:23:24.278092    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.281460    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.281501    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.285770    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
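	The .0 filenames above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: TLS tooling resolves a CA by hashing its subject and looking for /etc/ssl/certs/<hash>.0. A sketch of how one of those links is derived:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # b5213941.0 in this run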
	I0917 10:23:24.294687    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:23:24.298152    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:23:24.302803    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:23:24.307168    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:23:24.311812    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:23:24.316345    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:23:24.320697    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
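	Each -checkend 86400 probe exits non-zero if the certificate would expire within the next 24 hours, which is what triggers regeneration. The same checks in loop form, over a few of the files probed above:
	
	  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	      || echo "$c.crt expires within 24h"
	  done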
	I0917 10:23:24.325019    4318 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:24.325142    4318 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:23:24.337612    4318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:23:24.345939    4318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:23:24.345951    4318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:23:24.345995    4318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:23:24.354304    4318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:23:24.354625    4318 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.354704    4318 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:23:24.354943    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.355336    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.355573    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:23:24.355889    4318 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:23:24.356070    4318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:23:24.364125    4318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:23:24.364137    4318 kubeadm.go:597] duration metric: took 18.181933ms to restartPrimaryControlPlane
	I0917 10:23:24.364142    4318 kubeadm.go:394] duration metric: took 39.129847ms to StartCluster
	I0917 10:23:24.364150    4318 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.364222    4318 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.364601    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
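	After the repair writes the missing cluster and context entries, both should resolve by name. A quick sketch against the kubeconfig used in this run:
	
	  kubectl --kubeconfig /Users/jenkins/minikube-integration/19662-1558/kubeconfig config get-contexts ha-744000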
	I0917 10:23:24.364822    4318 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:23:24.364835    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:23:24.364845    4318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:23:24.365364    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:24.407801    4318 out.go:177] * Enabled addons: 
	I0917 10:23:24.449987    4318 addons.go:510] duration metric: took 84.961836ms for enable addons: enabled=[]
	I0917 10:23:24.450005    4318 start.go:246] waiting for cluster config update ...
	I0917 10:23:24.450011    4318 start.go:255] writing updated cluster config ...
	I0917 10:23:24.470905    4318 out.go:201] 
	I0917 10:23:24.492266    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:24.492406    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.514885    4318 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:23:24.556844    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:24.556881    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:24.557072    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:24.557091    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:24.557227    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.558233    4318 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:24.558336    4318 start.go:364] duration metric: took 78.234µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:23:24.558362    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:24.558375    4318 fix.go:54] fixHost starting: m02
	I0917 10:23:24.558805    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:24.558841    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:24.567958    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51922
	I0917 10:23:24.568283    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:24.568655    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:24.568674    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:24.568935    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:24.569064    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:24.569164    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:23:24.569268    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.569346    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4278
	I0917 10:23:24.570356    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4278 missing from process table
	I0917 10:23:24.570389    4318 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:23:24.570398    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:23:24.570487    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:24.612951    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:23:24.633920    4318 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:23:24.634199    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.634258    4318 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:23:24.636176    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4278 missing from process table
	I0917 10:23:24.636188    4318 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4278 is in state "Stopped"
	I0917 10:23:24.636209    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
	I0917 10:23:24.636621    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:23:24.663465    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:23:24.663489    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:24.663621    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:24.663651    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:24.663689    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machine
s/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:24.663725    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:24.663736    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:24.665138    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Pid is 4339
	I0917 10:23:24.665538    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:23:24.665551    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.665623    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:23:24.667294    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:23:24.667331    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:24.667353    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:23:24.667370    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:24.667381    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c3c}
	I0917 10:23:24.667387    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:23:24.667404    4318 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
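
The lease scan logged above is how the hyperkit driver discovers a VM's address: the guest gets its IP from macOS's bootpd, so the driver maps the generated MAC to an IP by reading /var/db/dhcpd_leases. A minimal Go sketch of that lookup, assuming the stock bootpd lease format (`{ name=... ip_address=... hw_address=1,<mac> ... }`); this is an illustration of the step, not the driver's actual parser:

```go
// Sketch: scan /var/db/dhcpd_leases for a MAC and return its IP.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry begins
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// the value looks like "1,72:92:6:7e:7d:92"
			if strings.HasSuffix(line, ","+mac) && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "72:92:6:7e:7d:92")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.169.0.6 in the run above
}
```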
	I0917 10:23:24.667444    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:23:24.668104    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:24.668293    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.668710    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:24.668719    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:24.668846    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:24.668942    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:24.669029    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:24.669114    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:24.669205    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:24.669366    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:24.669585    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:24.669596    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:24.672842    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:24.682575    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:24.683443    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:24.683460    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:24.683476    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:24.683483    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:25.071063    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:25.071079    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:25.186245    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:25.186263    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:25.186274    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:25.186284    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:25.187156    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:25.187168    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:23:30.799209    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:23:30.799230    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:23:30.799236    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:23:30.822685    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:23:33.867917    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0917 10:23:36.934481    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:23:36.934496    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:36.934638    4318 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:23:36.934649    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:36.934745    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:36.934837    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:36.934932    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:36.935015    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:36.935112    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:36.935288    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:36.935440    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:36.935451    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:23:37.008879    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:23:37.008894    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.009061    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.009159    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.009242    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.009338    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.009486    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.009649    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.009660    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:23:37.078741    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
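
The SSH command shown above is the provisioner's standard /etc/hosts fix-up: rewrite the 127.0.1.1 entry if one exists, otherwise append one. A small Go sketch that reproduces the shell from the log with only the hostname parameterized (`setHostnameCmd` is a hypothetical helper, not minikube's API):

```go
// Sketch: build the /etc/hosts fix-up script the provisioner runs over SSH.
package main

import "fmt"

func setHostnameCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() { fmt.Println(setHostnameCmd("ha-744000-m02")) }
```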
	I0917 10:23:37.078758    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:23:37.078768    4318 buildroot.go:174] setting up certificates
	I0917 10:23:37.078774    4318 provision.go:84] configureAuth start
	I0917 10:23:37.078780    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:37.078916    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:37.079043    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.079131    4318 provision.go:143] copyHostCerts
	I0917 10:23:37.079159    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:37.079221    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:23:37.079228    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:37.079376    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:23:37.079595    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:37.079637    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:23:37.079642    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:37.079718    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:23:37.079893    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:37.079933    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:23:37.079938    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:37.080019    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:23:37.080160    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
	I0917 10:23:37.154648    4318 provision.go:177] copyRemoteCerts
	I0917 10:23:37.154702    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:23:37.154717    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.154843    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.154952    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.155045    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.155124    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:37.199228    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:23:37.199298    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:23:37.219018    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:23:37.219098    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:23:37.237862    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:23:37.237936    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:23:37.256979    4318 provision.go:87] duration metric: took 178.197064ms to configureAuth
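
configureAuth's copyHostCerts step above removes any stale ca.pem/cert.pem/key.pem in the machine store and re-copies the fresh ones. A local sketch of that copy with a basic PEM sanity check; the paths and the `copyPEM` helper are illustrative assumptions, not minikube's API:

```go
// Sketch: copy a PEM file into a store after checking it parses as PEM.
package main

import (
	"encoding/pem"
	"fmt"
	"os"
)

func copyPEM(src, dst string, mode os.FileMode) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if blk, _ := pem.Decode(data); blk == nil {
		return fmt.Errorf("%s: not PEM data", src)
	}
	// Remove any stale copy first, as the "found ..., removing ..." lines do.
	_ = os.Remove(dst)
	return os.WriteFile(dst, data, mode)
}

func main() {
	if err := copyPEM("certs/ca.pem", "ca.pem", 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```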
	I0917 10:23:37.256993    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:23:37.257173    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:37.257186    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:37.257323    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.257405    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.257494    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.257572    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.257650    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.257770    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.257893    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.257901    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:23:37.319570    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:23:37.319583    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:23:37.319682    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:23:37.319696    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.319826    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.319938    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.320027    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.320108    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.320250    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.320387    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.320434    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:23:37.391815    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:23:37.391831    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.391975    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.392081    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.392159    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.392252    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.392374    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.392517    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.392529    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:23:39.075500    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:23:39.075515    4318 machine.go:96] duration metric: took 14.406707663s to provisionDockerMachine
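
The `diff ... || { mv ...; daemon-reload; enable; restart; }` command above is an idempotent-install pattern: the unit file is replaced and the daemon bounced only when the rendered content actually differs (here diff failed because no docker.service existed yet, so the new unit was installed and enabled). A local-file sketch of the same decision; the SSH layer is omitted and `writeIfChanged` is a hypothetical helper:

```go
// Sketch: write a unit file only when its content changed, and report
// whether a daemon-reload/restart would be needed.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged returns true when dst was (re)written.
func writeIfChanged(dst string, content []byte) (bool, error) {
	old, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical content: skip reload/restart
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	return true, os.WriteFile(dst, content, 0o644)
}

func main() {
	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n...\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		fmt.Println("unit changed: would run daemon-reload && restart docker")
	}
}
```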
	I0917 10:23:39.075523    4318 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:23:39.075537    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:23:39.075547    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.075750    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:23:39.075764    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.075857    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.075952    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.076033    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.076151    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.119221    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:23:39.122818    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:23:39.122833    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:23:39.122960    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:23:39.123143    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:23:39.123150    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:23:39.123359    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:23:39.133517    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:39.159170    4318 start.go:296] duration metric: took 83.636865ms for postStartSetup
	I0917 10:23:39.159198    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.159385    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:23:39.159399    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.159480    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.159562    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.159664    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.159748    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.198408    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:23:39.198471    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:23:39.229469    4318 fix.go:56] duration metric: took 14.671003724s for fixHost
	I0917 10:23:39.229492    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.229627    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.229719    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.229810    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.229886    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.230020    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:39.230204    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:39.230212    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:23:39.293184    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593819.261870922
	
	I0917 10:23:39.293196    4318 fix.go:216] guest clock: 1726593819.261870922
	I0917 10:23:39.293204    4318 fix.go:229] Guest: 2024-09-17 10:23:39.261870922 -0700 PDT Remote: 2024-09-17 10:23:39.229481 -0700 PDT m=+34.885826601 (delta=32.389922ms)
	I0917 10:23:39.293215    4318 fix.go:200] guest clock delta is within tolerance: 32.389922ms
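
The clock check above runs `date +%s.%N` in the guest and compares the result with the host clock, logging the delta. A sketch of the parse-and-compare; the 2s tolerance is an assumed value for illustration, not the value fix.go uses:

```go
// Sketch: parse the guest's `date +%s.%N` output and compute the clock delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 { // fractional part is nanoseconds from %N
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726593819.261870922") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n",
		delta, delta.Abs() < 2*time.Second)
}
```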
	I0917 10:23:39.293218    4318 start.go:83] releasing machines lock for "ha-744000-m02", held for 14.734778852s
	I0917 10:23:39.293233    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.293362    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:39.314064    4318 out.go:177] * Found network options:
	I0917 10:23:39.336076    4318 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:23:39.357954    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:23:39.357993    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.358861    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.359070    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.359183    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:23:39.359227    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:23:39.359301    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:23:39.359362    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.359383    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:23:39.359396    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.359477    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.359514    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.359570    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.359617    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.359685    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.359724    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.359838    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:23:39.394282    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:23:39.394363    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:23:39.443373    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:23:39.443395    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:39.443489    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:39.459065    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:23:39.468374    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:23:39.477348    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:23:39.477400    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:23:39.486283    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:39.495295    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:23:39.504241    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:39.513081    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:23:39.522253    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:23:39.531218    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:23:39.540147    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:23:39.549122    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:23:39.557208    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:23:39.565185    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:39.663216    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
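
The run of sed invocations above pins containerd to the cgroupfs driver (SystemdCgroup = false, runc.v2, the CNI conf_dir, and so on). The central edit can be done in-process as a plain regexp rewrite; this is a sketch of that one substitution, not minikube's implementation:

```go
// Sketch: force SystemdCgroup = false in /etc/containerd/config.toml.
package main

import (
	"fmt"
	"os"
	"regexp"
)

var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func useCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Keep the original indentation, replace only the value.
	out := systemdCgroup.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := useCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```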
	I0917 10:23:39.682558    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:39.682635    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:23:39.697642    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:39.710638    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:23:39.730208    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:39.740809    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:39.751126    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:23:39.776526    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:39.786854    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:39.801713    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:23:39.804604    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:23:39.811689    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:23:39.825130    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:23:39.919765    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:23:40.027561    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:23:40.027584    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:23:40.041479    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:40.155257    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:23:42.501803    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346511037s)
	I0917 10:23:42.501877    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:23:42.512430    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:23:42.525247    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:42.535597    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:23:42.632719    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:23:42.733072    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:42.848472    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:23:42.862095    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:42.873097    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:42.974162    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:23:43.038704    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:23:43.038791    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:23:43.043279    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:23:43.043348    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:23:43.046420    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:23:43.072844    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
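
"Will wait 60s for socket path" above is a simple existence poll on /var/run/cri-dockerd.sock before crictl is invoked. A sketch under that assumption (the 500ms poll interval is illustrative):

```go
// Sketch: stat a socket path until it appears or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is up")
}
```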
	I0917 10:23:43.072933    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:43.089215    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:43.128559    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:23:43.170903    4318 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 10:23:43.192137    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:43.192563    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:23:43.197213    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:43.206867    4318 mustload.go:65] Loading cluster: ha-744000
	I0917 10:23:43.207054    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:43.207326    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:43.207347    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:43.216115    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51945
	I0917 10:23:43.216443    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:43.216788    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:43.216802    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:43.217026    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:43.217137    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:23:43.217215    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:43.217301    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:23:43.218337    4318 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:23:43.218598    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:43.218625    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:43.227260    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51947
	I0917 10:23:43.227601    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:43.227937    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:43.227951    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:43.228147    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:43.228251    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:43.228345    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.6
	I0917 10:23:43.228352    4318 certs.go:194] generating shared ca certs ...
	I0917 10:23:43.228362    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:43.228527    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:23:43.228599    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:23:43.228607    4318 certs.go:256] generating profile certs ...
	I0917 10:23:43.228718    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:23:43.228804    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.026a9cc7
	I0917 10:23:43.228872    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:23:43.228880    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:23:43.228899    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:23:43.228920    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:23:43.228937    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:23:43.228954    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:23:43.228981    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:23:43.229010    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:23:43.229028    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:23:43.229119    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:23:43.229166    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:23:43.229175    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:23:43.229206    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:23:43.229242    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:23:43.229274    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:23:43.229342    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:43.229373    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.229393    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.229410    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.229434    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:43.229530    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:43.229617    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:43.229683    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:43.229765    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:43.256849    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 10:23:43.260879    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 10:23:43.269481    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 10:23:43.272632    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 10:23:43.280513    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 10:23:43.283582    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 10:23:43.291364    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 10:23:43.294480    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 10:23:43.302789    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 10:23:43.305925    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 10:23:43.313934    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 10:23:43.316968    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 10:23:43.325080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:23:43.345191    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:23:43.364654    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:23:43.384379    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:23:43.404164    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:23:43.424264    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:23:43.444115    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:23:43.463631    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:23:43.483492    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:23:43.502975    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:23:43.522485    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:23:43.543691    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 10:23:43.558295    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 10:23:43.571956    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 10:23:43.585450    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 10:23:43.598936    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 10:23:43.612569    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 10:23:43.626000    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 10:23:43.639468    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:23:43.643552    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:23:43.652183    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.655515    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.655555    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.659696    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:23:43.668232    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:23:43.676488    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.679940    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.679985    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.684222    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:23:43.692551    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:23:43.700894    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.704479    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.704526    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.708650    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:23:43.716969    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:23:43.720371    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:23:43.724736    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:23:43.728968    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:23:43.733213    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:23:43.737400    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:23:43.741597    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
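
The `-checkend 86400` probes above ask openssl whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check expressed in Go via crypto/x509 (the path is one of the certs from the log):

```go
// Sketch: report whether a PEM certificate expires within a given duration,
// mirroring `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	blk, _ := pem.Decode(data)
	if blk == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(blk.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```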
	I0917 10:23:43.745820    4318 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 10:23:43.745877    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:23:43.745890    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:23:43.745926    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:23:43.758434    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:23:43.758473    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
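
The kube-vip config above is a generated static-pod manifest dropped into /etc/kubernetes/manifests. One way such a manifest can be rendered is with text/template; the fragment below covers only a few of the env vars from the log and is a sketch, not minikube's actual template:

```go
// Sketch: render a kube-vip-style static-pod manifest from a template.
package main

import (
	"os"
	"text/template"
)

const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: [manager]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, struct {
		Port  int
		VIP   string
		Image string
	}{8443, "192.169.0.254", "ghcr.io/kube-vip/kube-vip:v0.8.0"})
}
```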
	I0917 10:23:43.758527    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:23:43.766283    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:23:43.766331    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 10:23:43.773641    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 10:23:43.786920    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:23:43.800443    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:23:43.813790    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:23:43.816730    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:43.826099    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:43.934702    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:43.949825    4318 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:23:43.950025    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:43.971583    4318 out.go:177] * Verifying Kubernetes components...
	I0917 10:23:44.013350    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:44.148955    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:44.167233    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:44.167427    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 10:23:44.167473    4318 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 10:23:44.167643    4318 node_ready.go:35] waiting up to 6m0s for node "ha-744000-m02" to be "Ready" ...
	I0917 10:23:44.167726    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:44.167731    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:44.167739    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:44.167743    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.307737    4318 round_trippers.go:574] Response Status: 200 OK in 8139 milliseconds
	I0917 10:23:52.308306    4318 node_ready.go:49] node "ha-744000-m02" has status "Ready":"True"
	I0917 10:23:52.308317    4318 node_ready.go:38] duration metric: took 8.140607385s for node "ha-744000-m02" to be "Ready" ...
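
	The node-readiness wait above repeatedly GETs /api/v1/nodes/<name> until the Ready condition reports True. A minimal client-go sketch of the same check (kubeconfig path and node name are illustrative; minikube's own loop in node_ready.go adds logging and backoff):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same budget as the log above: up to 6 minutes for the node.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-744000-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // simple poll between GETs
	}
}
```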
	I0917 10:23:52.308324    4318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:23:52.308363    4318 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:23:52.308373    4318 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:23:52.308426    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:52.308431    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.308441    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.308444    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.320722    4318 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0917 10:23:52.327343    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.327408    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j9jcc
	I0917 10:23:52.327415    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.327421    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.327424    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.333529    4318 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 10:23:52.334030    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.334039    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.334045    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.334048    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.338396    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:52.338672    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.338681    4318 pod_ready.go:82] duration metric: took 11.322168ms for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.338688    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.338729    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-khnlh
	I0917 10:23:52.338734    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.338739    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.338744    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.344023    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:23:52.344589    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.344597    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.344602    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.344606    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.349539    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:52.349983    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.349992    4318 pod_ready.go:82] duration metric: took 11.298293ms for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.349999    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.350040    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000
	I0917 10:23:52.350045    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.350051    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.350055    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.357637    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:23:52.358005    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.358013    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.358019    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.358027    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.365136    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:23:52.365716    4318 pod_ready.go:93] pod "etcd-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.365726    4318 pod_ready.go:82] duration metric: took 15.722025ms for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.365733    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.365780    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m02
	I0917 10:23:52.365789    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.365795    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.365799    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.369072    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.369567    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:52.369575    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.369581    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.369584    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.373049    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.373553    4318 pod_ready.go:93] pod "etcd-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.373563    4318 pod_ready.go:82] duration metric: took 7.825215ms for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.373570    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.373616    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:23:52.373621    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.373626    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.373631    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.376282    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.509242    4318 request.go:632] Waited for 132.500318ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:52.509283    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:52.509290    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.509317    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.509323    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.513207    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.513696    4318 pod_ready.go:93] pod "etcd-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.513705    4318 pod_ready.go:82] duration metric: took 140.128679ms for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
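
	The repeated "Waited ... due to client-side throttling" lines come from client-go's default rate limiter: with QPS and Burst left at zero in the rest.Config (visible in the kapi.go dump above), the client falls back to 5 QPS with a burst of 10, which produces the ~200ms waits between GETs. A minimal sketch of raising those limits (values illustrative, trading apiserver load for speed):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Zero values mean "use the defaults" (5 QPS, burst 10), which is
	// what triggers the throttling messages in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```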
	I0917 10:23:52.513724    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.709621    4318 request.go:632] Waited for 195.859717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:23:52.709653    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:23:52.709657    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.709664    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.709669    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.711912    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.908496    4318 request.go:632] Waited for 196.021957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.908552    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.908558    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.908563    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.908566    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.911337    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.911774    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.911783    4318 pod_ready.go:82] duration metric: took 398.052058ms for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.911790    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.108964    4318 request.go:632] Waited for 197.132834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:23:53.109014    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:23:53.109019    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.109025    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.109029    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.112077    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:53.308769    4318 request.go:632] Waited for 196.065261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:53.308824    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:53.308830    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.308836    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.308840    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.313525    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:53.313816    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:53.313826    4318 pod_ready.go:82] duration metric: took 402.029202ms for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.313836    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.509951    4318 request.go:632] Waited for 196.074667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:23:53.509985    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:23:53.509990    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.510035    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.510042    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.514822    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:53.709150    4318 request.go:632] Waited for 193.647696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:53.709201    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:53.709210    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.709254    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.709264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.712954    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:53.713373    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:53.713382    4318 pod_ready.go:82] duration metric: took 399.538201ms for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.713389    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.908806    4318 request.go:632] Waited for 195.370205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:23:53.908887    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:23:53.908897    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.908909    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.908917    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.911967    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.108997    4318 request.go:632] Waited for 196.429766ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:54.109063    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:54.109070    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.109082    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.109089    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.112475    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.114386    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.114395    4318 pod_ready.go:82] duration metric: took 400.998189ms for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.114402    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.308794    4318 request.go:632] Waited for 194.35354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:23:54.308838    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:23:54.308874    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.308882    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.308915    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.311225    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:54.508611    4318 request.go:632] Waited for 197.017438ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:54.508643    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:54.508648    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.508654    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.508658    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.513358    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:54.514643    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.514653    4318 pod_ready.go:82] duration metric: took 400.244458ms for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.514660    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.709389    4318 request.go:632] Waited for 194.662221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:23:54.709498    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:23:54.709508    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.709517    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.709522    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.712945    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.908904    4318 request.go:632] Waited for 195.122532ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:54.908956    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:54.908964    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.908976    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.908984    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.912489    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.912833    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.912844    4318 pod_ready.go:82] duration metric: took 398.175427ms for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.912853    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.109718    4318 request.go:632] Waited for 196.795087ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:23:55.109851    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:23:55.109863    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.109874    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.109880    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.113014    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:55.310231    4318 request.go:632] Waited for 196.716951ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:23:55.310297    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:23:55.310304    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.310310    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.310327    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.312467    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.312877    4318 pod_ready.go:93] pod "kube-proxy-66bkb" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:55.312887    4318 pod_ready.go:82] duration metric: took 400.026129ms for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.312894    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.508659    4318 request.go:632] Waited for 195.71304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:23:55.508705    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:23:55.508714    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.508762    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.508776    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.511406    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.709478    4318 request.go:632] Waited for 197.620419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:55.709553    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:55.709561    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.709569    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.709573    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.712068    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.712400    4318 pod_ready.go:93] pod "kube-proxy-6xd2h" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:55.712409    4318 pod_ready.go:82] duration metric: took 399.507321ms for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.712415    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.908839    4318 request.go:632] Waited for 196.378567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:23:55.908879    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:23:55.908886    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.908894    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.908903    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.911317    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.108670    4318 request.go:632] Waited for 196.90743ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:56.108733    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:56.108741    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.108750    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.108755    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.111013    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.111432    4318 pod_ready.go:93] pod "kube-proxy-c5xbc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.111441    4318 pod_ready.go:82] duration metric: took 399.01941ms for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.111448    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.309131    4318 request.go:632] Waited for 197.638325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:23:56.309195    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:23:56.309203    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.309211    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.309218    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.311722    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.510036    4318 request.go:632] Waited for 197.949522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:56.510102    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:56.510108    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.510114    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.510116    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.514224    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:56.514571    4318 pod_ready.go:93] pod "kube-proxy-k9xsp" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.514581    4318 pod_ready.go:82] duration metric: took 403.125717ms for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.514588    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.708850    4318 request.go:632] Waited for 194.175339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:23:56.708991    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:23:56.709003    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.709014    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.709019    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.712753    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:56.909408    4318 request.go:632] Waited for 196.094397ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:56.909453    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:56.909458    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.909464    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.909469    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.911617    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.911990    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.911998    4318 pod_ready.go:82] duration metric: took 397.403001ms for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.912004    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.108563    4318 request.go:632] Waited for 196.516714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:23:57.108623    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:23:57.108651    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.108657    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.108661    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.111145    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:57.310537    4318 request.go:632] Waited for 198.433255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:57.310658    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:57.310670    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.310681    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.310688    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.313850    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:57.314399    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:57.314411    4318 pod_ready.go:82] duration metric: took 402.398279ms for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.314420    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.508583    4318 request.go:632] Waited for 194.120837ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:23:57.508650    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:23:57.508656    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.508662    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.508667    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.510939    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:57.709335    4318 request.go:632] Waited for 198.006371ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:57.709452    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:57.709463    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.709475    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.709482    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.712690    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:57.713150    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:57.713163    4318 pod_ready.go:82] duration metric: took 398.73468ms for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.713172    4318 pod_ready.go:39] duration metric: took 5.404804093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:23:57.713193    4318 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:23:57.713279    4318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:23:57.724647    4318 api_server.go:72] duration metric: took 13.774712051s to wait for apiserver process to appear ...
	I0917 10:23:57.724659    4318 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:23:57.724675    4318 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 10:23:57.728863    4318 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 10:23:57.728906    4318 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 10:23:57.728911    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.728929    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.728935    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.729498    4318 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 10:23:57.729550    4318 api_server.go:141] control plane version: v1.31.1
	I0917 10:23:57.729558    4318 api_server.go:131] duration metric: took 4.895474ms to wait for apiserver health ...
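
	The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok", exactly as logged. A minimal sketch of the same probe (the real check trusts the cluster CA from ca.crt; InsecureSkipVerify here is a shortcut for illustration only):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: production code verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```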
	I0917 10:23:57.729563    4318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 10:23:57.909401    4318 request.go:632] Waited for 179.781674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:57.909604    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:57.909621    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.909636    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.909648    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.914890    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:23:57.920746    4318 system_pods.go:59] 26 kube-system pods found
	I0917 10:23:57.920767    4318 system_pods.go:61] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running
	I0917 10:23:57.920771    4318 system_pods.go:61] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running
	I0917 10:23:57.920774    4318 system_pods.go:61] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:23:57.920780    4318 system_pods.go:61] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 10:23:57.920785    4318 system_pods.go:61] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:23:57.920789    4318 system_pods.go:61] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:23:57.920791    4318 system_pods.go:61] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:23:57.920796    4318 system_pods.go:61] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 10:23:57.920802    4318 system_pods.go:61] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:23:57.920805    4318 system_pods.go:61] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:23:57.920808    4318 system_pods.go:61] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 10:23:57.920811    4318 system_pods.go:61] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:23:57.920815    4318 system_pods.go:61] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:23:57.920819    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 10:23:57.920824    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:23:57.920827    4318 system_pods.go:61] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:23:57.920829    4318 system_pods.go:61] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:23:57.920832    4318 system_pods.go:61] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:23:57.920836    4318 system_pods.go:61] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 10:23:57.920839    4318 system_pods.go:61] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:23:57.920844    4318 system_pods.go:61] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 10:23:57.920848    4318 system_pods.go:61] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:23:57.920851    4318 system_pods.go:61] "kube-vip-ha-744000" [4613d53e-c3b7-48eb-bb87-057beab671e7] Running
	I0917 10:23:57.920858    4318 system_pods.go:61] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:23:57.920862    4318 system_pods.go:61] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:23:57.920864    4318 system_pods.go:61] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:23:57.920868    4318 system_pods.go:74] duration metric: took 191.300068ms to wait for pod list to return data ...
	I0917 10:23:57.920876    4318 default_sa.go:34] waiting for default service account to be created ...
	I0917 10:23:58.108816    4318 request.go:632] Waited for 187.888047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:23:58.108877    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:23:58.108885    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.108893    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.108898    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.111818    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:58.111952    4318 default_sa.go:45] found service account: "default"
	I0917 10:23:58.111961    4318 default_sa.go:55] duration metric: took 191.079569ms for default service account to be created ...
	I0917 10:23:58.111967    4318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 10:23:58.309003    4318 request.go:632] Waited for 196.929892ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:58.309102    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:58.309111    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.309136    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.309143    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.314149    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:58.319524    4318 system_pods.go:86] 26 kube-system pods found
	I0917 10:23:58.319535    4318 system_pods.go:89] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running
	I0917 10:23:58.319541    4318 system_pods.go:89] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running
	I0917 10:23:58.319544    4318 system_pods.go:89] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:23:58.319549    4318 system_pods.go:89] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 10:23:58.319554    4318 system_pods.go:89] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:23:58.319557    4318 system_pods.go:89] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:23:58.319567    4318 system_pods.go:89] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:23:58.319571    4318 system_pods.go:89] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 10:23:58.319580    4318 system_pods.go:89] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:23:58.319584    4318 system_pods.go:89] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:23:58.319588    4318 system_pods.go:89] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 10:23:58.319591    4318 system_pods.go:89] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:23:58.319595    4318 system_pods.go:89] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:23:58.319599    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 10:23:58.319602    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:23:58.319612    4318 system_pods.go:89] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:23:58.319616    4318 system_pods.go:89] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:23:58.319618    4318 system_pods.go:89] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:23:58.319622    4318 system_pods.go:89] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 10:23:58.319628    4318 system_pods.go:89] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:23:58.319632    4318 system_pods.go:89] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 10:23:58.319635    4318 system_pods.go:89] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:23:58.319639    4318 system_pods.go:89] "kube-vip-ha-744000" [4613d53e-c3b7-48eb-bb87-057beab671e7] Running
	I0917 10:23:58.319642    4318 system_pods.go:89] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:23:58.319644    4318 system_pods.go:89] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:23:58.319647    4318 system_pods.go:89] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:23:58.319651    4318 system_pods.go:126] duration metric: took 207.678997ms to wait for k8s-apps to be running ...
	I0917 10:23:58.319662    4318 system_svc.go:44] waiting for kubelet service to be running ...
	I0917 10:23:58.319720    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:23:58.331325    4318 system_svc.go:56] duration metric: took 11.65971ms WaitForService to wait for kubelet
	I0917 10:23:58.331338    4318 kubeadm.go:582] duration metric: took 14.381399967s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:23:58.331366    4318 node_conditions.go:102] verifying NodePressure condition ...
	I0917 10:23:58.509807    4318 request.go:632] Waited for 178.384911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 10:23:58.509886    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 10:23:58.509895    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.509908    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.509913    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.514102    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:58.514949    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514961    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514970    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514973    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514976    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514979    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514982    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514995    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.515002    4318 node_conditions.go:105] duration metric: took 183.62967ms to run NodePressure ...
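
	The NodePressure check above lists all nodes once and reads the capacity figures off each node's status. A minimal client-go sketch that prints the same cpu and ephemeral-storage capacities (kubeconfig path illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the figures the node_conditions check reads per node.
	for _, n := range nodes.Items {
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
```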
	I0917 10:23:58.515010    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:23:58.515030    4318 start.go:255] writing updated cluster config ...
	I0917 10:23:58.535539    4318 out.go:201] 
	I0917 10:23:58.573360    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:58.573455    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.595258    4318 out.go:177] * Starting "ha-744000-m03" control-plane node in "ha-744000" cluster
	I0917 10:23:58.653092    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:58.653125    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:58.653337    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:58.653370    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:58.653501    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.654346    4318 start.go:360] acquireMachinesLock for ha-744000-m03: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:58.654469    4318 start.go:364] duration metric: took 97.666µs to acquireMachinesLock for "ha-744000-m03"
	I0917 10:23:58.654496    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:58.654503    4318 fix.go:54] fixHost starting: m03
	I0917 10:23:58.655039    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:58.655076    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:58.665444    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0917 10:23:58.665867    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:58.666300    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:58.666321    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:58.666529    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:58.666645    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:23:58.666734    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetState
	I0917 10:23:58.666815    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.666929    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 3837
	I0917 10:23:58.667977    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid 3837 missing from process table
	I0917 10:23:58.668019    4318 fix.go:112] recreateIfNeeded on ha-744000-m03: state=Stopped err=<nil>
	I0917 10:23:58.668029    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	W0917 10:23:58.668111    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:58.707286    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m03" ...
	I0917 10:23:58.781042    4318 main.go:141] libmachine: (ha-744000-m03) Calling .Start
	I0917 10:23:58.781398    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.781451    4318 main.go:141] libmachine: (ha-744000-m03) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid
	I0917 10:23:58.783354    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid 3837 missing from process table
	I0917 10:23:58.783371    4318 main.go:141] libmachine: (ha-744000-m03) DBG | pid 3837 is in state "Stopped"
	I0917 10:23:58.783401    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid...
	I0917 10:23:58.783560    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Using UUID 2629e9cb-d7e0-4a36-a6bd-c4320ca3711f
	I0917 10:23:58.808610    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Generated MAC 5a:8d:be:33:c3:18
	I0917 10:23:58.808632    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:58.808748    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004040c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:58.808788    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004040c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:58.808853    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/ha-744000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:58.808899    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2629e9cb-d7e0-4a36-a6bd-c4320ca3711f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/ha-744000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:58.808915    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:58.810278    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Pid is 4346
	I0917 10:23:58.810623    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Attempt 0
	I0917 10:23:58.810633    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.810707    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 4346
	I0917 10:23:58.812422    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Searching for 5a:8d:be:33:c3:18 in /var/db/dhcpd_leases ...
	I0917 10:23:58.812491    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:58.812547    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:23:58.812578    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:23:58.812610    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:58.812627    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetConfigRaw
	I0917 10:23:58.812629    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0ba8}
	I0917 10:23:58.812645    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Found match: 5a:8d:be:33:c3:18
	I0917 10:23:58.812659    4318 main.go:141] libmachine: (ha-744000-m03) DBG | IP: 192.169.0.7
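The MAC-to-IP resolution above works by scanning macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the VM's generated MAC (5a:8d:be:33:c3:18 here). A rough Go sketch of that scan, assuming the stock macOS lease format with `ip_address=` and `hw_address=1,<mac>` lines; those field names are an assumption based on the standard file, not taken from this log, and real code would also normalize octets since macOS writes MAC bytes without leading zeros:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans an Apple-style dhcpd_leases file for the lease whose
// hw_address matches mac and returns its ip_address. It relies on
// ip_address= appearing before hw_address= within each lease block.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// format: hw_address=1,5a:8d:be:33:c3:18
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "5a:8d:be:33:c3:18")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}
```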
	I0917 10:23:58.813322    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:23:58.813511    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.814083    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:58.814095    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:23:58.814255    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:23:58.814354    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:23:58.814443    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:23:58.814551    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:23:58.814660    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:23:58.814840    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:58.815013    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:23:58.815022    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:58.818431    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:58.826878    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:58.827963    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:58.827996    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:58.828016    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:58.828056    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:59.216264    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:59.216286    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:59.331075    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:59.331093    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:59.331106    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:59.331113    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:59.331943    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:59.331953    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:24:04.953344    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:24:04.953400    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:24:04.953409    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:24:04.976712    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:24:08.843565    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.7:22: connect: connection refused
	I0917 10:24:11.901419    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:24:11.901434    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:11.901561    4318 buildroot.go:166] provisioning hostname "ha-744000-m03"
	I0917 10:24:11.901572    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:11.901663    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:11.901749    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:11.901841    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.901928    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.902023    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:11.902156    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:11.902302    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:11.902310    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m03 && echo "ha-744000-m03" | sudo tee /etc/hostname
	I0917 10:24:11.969021    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m03
	
	I0917 10:24:11.969036    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:11.969172    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:11.969284    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.969390    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.969484    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:11.969628    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:11.969778    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:11.969789    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:24:12.032993    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
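The shell block just executed is the provisioner's idempotent hostname fix-up: it leaves /etc/hosts alone when a line already names the host, prefers rewriting an existing 127.0.1.1 entry, and only appends as a last resort, so repeated provisioning never accumulates duplicates. The same decision tree in Go, operating on a local file purely for illustration:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell logic: no-op if the hostname is
// already present, rewrite a 127.0.1.1 line if one exists, otherwise
// append a fresh entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 1 && fields[len(fields)-1] == hostname {
			return nil // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-744000-m03"); err != nil {
		fmt.Println(err)
	}
}
```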
	I0917 10:24:12.033009    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:24:12.033021    4318 buildroot.go:174] setting up certificates
	I0917 10:24:12.033027    4318 provision.go:84] configureAuth start
	I0917 10:24:12.033034    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:12.033164    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:12.033268    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.033363    4318 provision.go:143] copyHostCerts
	I0917 10:24:12.033396    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:24:12.033443    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:24:12.033450    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:24:12.033597    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:24:12.033799    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:24:12.033838    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:24:12.033843    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:24:12.033926    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:24:12.034067    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:24:12.034095    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:24:12.034100    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:24:12.034194    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:24:12.034361    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m03 san=[127.0.0.1 192.169.0.7 ha-744000-m03 localhost minikube]
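configureAuth regenerates the machine's server certificate from the local CA, embedding exactly the SANs logged above (loopback, the lease IP, the node hostname, and the generic localhost/minikube names). A compressed sketch of SAN-bearing certificate generation with crypto/x509; it is self-signed here purely to stay short, whereas the real provisioner signs with the ca.pem/ca-key.pem pair:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line: 127.0.0.1 192.169.0.7 ha-744000-m03 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:    []string{"ha-744000-m03", "localhost", "minikube"},
	}

	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```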
	I0917 10:24:12.149328    4318 provision.go:177] copyRemoteCerts
	I0917 10:24:12.149388    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:24:12.149403    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.149590    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.149685    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.149761    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.149846    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:12.184712    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:24:12.184807    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:24:12.204199    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:24:12.204267    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:24:12.223758    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:24:12.223831    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:24:12.243169    4318 provision.go:87] duration metric: took 210.132957ms to configureAuth
	I0917 10:24:12.243183    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:24:12.243371    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:12.243385    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:12.243518    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.243598    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.243687    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.243761    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.243855    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.243970    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.244103    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.244110    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:24:12.301530    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:24:12.301541    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:24:12.301620    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:24:12.301632    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.301763    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.301869    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.301966    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.302040    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.302167    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.302303    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.302348    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:24:12.370095    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:24:12.370113    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.370241    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.370333    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.370424    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.370523    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.370657    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.370794    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.370805    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:24:14.004628    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:24:14.004644    4318 machine.go:96] duration metric: took 15.190455794s to provisionDockerMachine
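The `diff -u old new || { mv ...; daemon-reload; enable; restart; }` one-liner above is a write-if-changed guard: the unit is staged as docker.service.new, compared against the installed copy, and only swapped in (followed by a reload/enable/restart) when they differ, so an unchanged config never bounces the daemon. In this run the diff fails simply because no unit existed yet, which still takes the install branch. A Go rendering of the same guard, with illustrative paths:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged swaps newPath into place and restarts the unit only
// when its content differs from (or is missing at) unitPath.
func installIfChanged(unitPath, newPath, unit string) error {
	oldData, oldErr := os.ReadFile(unitPath) // oldErr != nil covers "not installed yet"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if oldErr == nil && bytes.Equal(oldData, newData) {
		return nil // nothing changed; don't bounce the daemon
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unit},
		{"systemctl", "restart", unit},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", args, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		fmt.Println(err)
	}
}
```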
	I0917 10:24:14.004650    4318 start.go:293] postStartSetup for "ha-744000-m03" (driver="hyperkit")
	I0917 10:24:14.004657    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:24:14.004672    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.004878    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:24:14.004901    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.005017    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.005138    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.005237    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.005322    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.044460    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:24:14.048554    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:24:14.048568    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:24:14.048680    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:24:14.048820    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:24:14.048826    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:24:14.048988    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:24:14.057354    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:24:14.088743    4318 start.go:296] duration metric: took 84.082897ms for postStartSetup
	I0917 10:24:14.088765    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.088958    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:24:14.088972    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.089062    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.089149    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.089239    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.089326    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.124314    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:24:14.124387    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:24:14.177086    4318 fix.go:56] duration metric: took 15.522482042s for fixHost
	I0917 10:24:14.177117    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.177268    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.177375    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.177470    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.177560    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.177699    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:14.177847    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:14.177855    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:24:14.235217    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593854.127008624
	
	I0917 10:24:14.235235    4318 fix.go:216] guest clock: 1726593854.127008624
	I0917 10:24:14.235240    4318 fix.go:229] Guest: 2024-09-17 10:24:14.127008624 -0700 PDT Remote: 2024-09-17 10:24:14.177103 -0700 PDT m=+69.833227660 (delta=-50.094376ms)
	I0917 10:24:14.235251    4318 fix.go:200] guest clock delta is within tolerance: -50.094376ms
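The clock check above samples the guest with `date +%s.%N` over SSH and diffs it against the host's clock; only a delta beyond some tolerance would trigger a resync, and -50ms passes. A sketch of that comparison; the one-second tolerance below is an assumption for illustration, not minikube's actual constant:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns
// guestTime - hostTime. float64 loses a few hundred nanoseconds at
// epoch scale, which is negligible against a millisecond tolerance.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance
	delta, err := clockDelta("1726593854.127008624", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; resync needed\n", delta)
	}
}
```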
	I0917 10:24:14.235255    4318 start.go:83] releasing machines lock for "ha-744000-m03", held for 15.580676894s
	I0917 10:24:14.235272    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.235402    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:14.257745    4318 out.go:177] * Found network options:
	I0917 10:24:14.279018    4318 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 10:24:14.300830    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:24:14.300855    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:24:14.300870    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301356    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301486    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301594    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:24:14.301623    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	W0917 10:24:14.301663    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:24:14.301685    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:24:14.301770    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:24:14.301785    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.301824    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.301934    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.301945    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.302070    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.302137    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.302238    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.302321    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.302438    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	W0917 10:24:14.334246    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:24:14.334313    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:24:14.380907    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:24:14.380924    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:24:14.381008    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:24:14.397032    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:24:14.406169    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:24:14.415306    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:24:14.415369    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:24:14.424550    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:24:14.435946    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:24:14.448076    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:24:14.457027    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:24:14.466527    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:24:14.475918    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:24:14.484801    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:24:14.494039    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:24:14.502344    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:24:14.510724    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:14.608373    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:24:14.627463    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:24:14.627552    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:24:14.644673    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:24:14.657243    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:24:14.675019    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:24:14.686098    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:24:14.697382    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:24:14.722583    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:24:14.734058    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:24:14.749179    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:24:14.752033    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:24:14.760199    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:24:14.773743    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:24:14.866897    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:24:14.972459    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:24:14.972482    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
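The 130-byte daemon.json pushed here pins Docker's cgroup driver to cgroupfs, matching the "configuring docker to use cgroupfs" line. The log does not show the file's contents; a plausible reconstruction (the exact field set is an assumption) is the standard exec-opts form, which a Go caller might emit like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models a minimal /etc/docker/daemon.json that forces the
// cgroupfs driver; the real file minikube writes may carry more fields.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```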
	I0917 10:24:14.986205    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:15.081962    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:24:17.363023    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.281026419s)
	I0917 10:24:17.363099    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:24:17.373222    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:24:17.386396    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:24:17.397093    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:24:17.488832    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:24:17.603916    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:17.712002    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:24:17.725875    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:24:17.737346    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:17.846138    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:24:17.910308    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:24:17.910400    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:24:17.914917    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:24:17.914984    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:24:17.918153    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:24:17.947145    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:24:17.947245    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:24:17.963719    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:24:18.000615    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:24:18.042227    4318 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 10:24:18.063289    4318 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 10:24:18.084167    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:18.084404    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:24:18.087640    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:24:18.098050    4318 mustload.go:65] Loading cluster: ha-744000
	I0917 10:24:18.098230    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:18.098462    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:18.098484    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:18.107325    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51975
	I0917 10:24:18.107666    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:18.108009    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:18.108026    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:18.108255    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:18.108371    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:24:18.108467    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:18.108528    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:24:18.109600    4318 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:24:18.109898    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:18.109929    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:18.118725    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51977
	I0917 10:24:18.119073    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:18.119409    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:18.119421    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:18.119635    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:18.119739    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:24:18.119820    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.7
	I0917 10:24:18.119829    4318 certs.go:194] generating shared ca certs ...
	I0917 10:24:18.119841    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:24:18.119995    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:24:18.120047    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:24:18.120060    4318 certs.go:256] generating profile certs ...
	I0917 10:24:18.120159    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:24:18.120243    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.2fbb59ab
	I0917 10:24:18.120301    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:24:18.120308    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:24:18.120350    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:24:18.120376    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:24:18.120395    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:24:18.120412    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:24:18.120438    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:24:18.120458    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:24:18.120476    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:24:18.120563    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:24:18.120603    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:24:18.120612    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:24:18.120645    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:24:18.120678    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:24:18.120708    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:24:18.120780    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:24:18.120814    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.120834    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.120851    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.120877    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:24:18.120957    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:24:18.121043    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:24:18.121130    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:24:18.121202    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:24:18.147236    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 10:24:18.150493    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 10:24:18.158955    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 10:24:18.162129    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 10:24:18.169902    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 10:24:18.173023    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 10:24:18.181042    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 10:24:18.184431    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 10:24:18.192679    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 10:24:18.195793    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 10:24:18.203953    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 10:24:18.207044    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 10:24:18.215067    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:24:18.235596    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:24:18.255384    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:24:18.274936    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:24:18.294598    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:24:18.314207    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:24:18.333653    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:24:18.352964    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:24:18.372887    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:24:18.392444    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:24:18.412080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:24:18.431948    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 10:24:18.445500    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 10:24:18.459362    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 10:24:18.473399    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 10:24:18.487272    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 10:24:18.501703    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 10:24:18.515561    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 10:24:18.529533    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:24:18.533858    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:24:18.543223    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.546597    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.546657    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.550937    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:24:18.560220    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:24:18.569425    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.572837    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.572891    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.577272    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:24:18.586607    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:24:18.596344    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.600052    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.600113    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.604520    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:24:18.614023    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:24:18.617509    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:24:18.621851    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:24:18.626160    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:24:18.630354    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:24:18.634589    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:24:18.638973    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
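This run of `openssl x509 -checkend 86400` commands is a pre-flight expiry sweep: -checkend exits non-zero if the certificate expires within the given number of seconds (24 hours here), and any failure would force regeneration before the node joins. The equivalent check in Go:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window (openssl's -checkend semantics).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if soon {
			fmt.Println(p, "expires within 24h; needs regeneration")
		}
	}
}
```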
	I0917 10:24:18.643298    4318 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 10:24:18.643362    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:24:18.643382    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:24:18.643427    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:24:18.656418    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:24:18.656455    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
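
kube-vip runs as a static pod on every control-plane node: the env vars above enable ARP announcement of the HA VIP 192.169.0.254 (`vip_arp`, `address`), leader election over a `plndr-cp-lock` lease (`vip_leaderelection`, lease timings), and the control-plane load balancing on port 8443 that the preceding log line says was auto-enabled (`lb_enable`, `lb_port`). A hedged sketch of how such a manifest could be rendered with text/template; the struct and template here are illustrative, not minikube's actual kube-vip.go:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative parameters; not minikube's real config struct.
    type kubeVipParams struct {
    	VIP   string
    	Port  int
    	Image string
    }

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
        image: {{ .Image }}
        name: kube-vip
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifest))
    	// Render to stdout; the test instead scp's the result to
    	// /etc/kubernetes/manifests/kube-vip.yaml on the node.
    	t.Execute(os.Stdout, kubeVipParams{
    		VIP:   "192.169.0.254",
    		Port:  8443,
    		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
    	})
    }
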
	I0917 10:24:18.656516    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:24:18.665097    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:24:18.665163    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 10:24:18.673393    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 10:24:18.687079    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:24:18.701092    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:24:18.714815    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:24:18.717763    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
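
The grep/bash pair above first checks whether /etc/hosts already maps the VIP to control-plane.minikube.internal, and if not rewrites the file atomically: strip any stale entry, append the current mapping, and sudo-copy a temp file back into place. A rough Go equivalent of that rewrite, assuming (unlike the log, which works over SSH with sudo) that the process can write /etc/hosts directly:

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts" // direct write is an assumption
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		// Drop any stale mapping for the control-plane alias.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.169.0.254\tcontrol-plane.minikube.internal")
    	out := strings.Join(kept, "\n") + "\n"
    	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
    		panic(err)
    	}
    }
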
	I0917 10:24:18.727902    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:18.829461    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:24:18.842084    4318 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:24:18.842275    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:18.863032    4318 out.go:177] * Verifying Kubernetes components...
	I0917 10:24:18.883865    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:18.998710    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:24:19.010018    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:24:19.010220    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 10:24:19.010257    4318 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 10:24:19.010447    4318 node_ready.go:35] waiting up to 6m0s for node "ha-744000-m03" to be "Ready" ...
	I0917 10:24:19.010490    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.010495    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.010502    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.010506    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.012607    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.012878    4318 node_ready.go:49] node "ha-744000-m03" has status "Ready":"True"
	I0917 10:24:19.012890    4318 node_ready.go:38] duration metric: took 2.431907ms for node "ha-744000-m03" to be "Ready" ...
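
node_ready polls GET /api/v1/nodes/<name> until the Node's Ready condition reports True; here the node was already Ready, so the wait resolved in 2.43ms. A minimal client-go sketch of the same loop (kubeconfig path and poll interval are illustrative assumptions):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror the log's 6m0s budget for the node to become Ready.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(),
    			"ha-744000-m03", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // poll interval is an assumption
    	}
    	panic("timed out waiting for node to become Ready")
    }
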
	I0917 10:24:19.012896    4318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:24:19.012942    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:19.012948    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.012953    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.012957    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.016637    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:19.021780    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.021832    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j9jcc
	I0917 10:24:19.021838    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.021845    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.021849    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.023987    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.024523    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.024531    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.024537    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.024540    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.026255    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.026592    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.026602    4318 pod_ready.go:82] duration metric: took 4.810235ms for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.026609    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.026651    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-khnlh
	I0917 10:24:19.026656    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.026661    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.026665    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.028592    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.029028    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.029035    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.029041    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.029046    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.031043    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.031318    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.031326    4318 pod_ready.go:82] duration metric: took 4.71115ms for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.031340    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.031385    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000
	I0917 10:24:19.031390    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.031395    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.031400    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.033205    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.033583    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.033590    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.033596    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.033600    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.035534    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.035980    4318 pod_ready.go:93] pod "etcd-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.035990    4318 pod_ready.go:82] duration metric: took 4.645198ms for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.035996    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.036034    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m02
	I0917 10:24:19.036039    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.036044    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.036047    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.038093    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.038513    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:19.038520    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.038526    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.038529    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.040485    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.041086    4318 pod_ready.go:93] pod "etcd-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.041096    4318 pod_ready.go:82] duration metric: took 5.095487ms for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.041103    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.210917    4318 request.go:632] Waited for 169.774559ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:24:19.210994    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:24:19.211005    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.211012    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.211017    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.219188    4318 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
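
The "Waited for ... due to client-side throttling" lines are client-go's own rate limiter, not API-server priority and fairness. The rest.Config dumped earlier shows QPS:0, Burst:0, so the client falls back to its defaults (5 requests/s sustained, burst of 10), and the back-to-back readiness probes queue for ~170-200ms each. A sketch of raising those limits on a rest.Config; the numbers are illustrative, not minikube's settings:

    package kubeutil

    import "k8s.io/client-go/rest"

    // relax raises client-side rate limits on a client-go rest.Config.
    func relax(cfg *rest.Config) *rest.Config {
    	cfg.QPS = 50    // sustained requests per second (default 5)
    	cfg.Burst = 100 // short burst allowance (default 10)
    	return cfg
    }
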
	I0917 10:24:19.410612    4318 request.go:632] Waited for 190.84697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.410658    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.410668    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.410679    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.410688    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.427654    4318 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 10:24:19.428047    4318 pod_ready.go:93] pod "etcd-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.428057    4318 pod_ready.go:82] duration metric: took 386.946972ms for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.428069    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.611188    4318 request.go:632] Waited for 183.076824ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:24:19.611240    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:24:19.611249    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.611257    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.611264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.622189    4318 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0917 10:24:19.811318    4318 request.go:632] Waited for 187.797206ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.811366    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.811407    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.811419    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.811426    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.823164    4318 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 10:24:19.823509    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.823520    4318 pod_ready.go:82] duration metric: took 395.442485ms for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.823528    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.010832    4318 request.go:632] Waited for 187.259959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:24:20.010872    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:24:20.010876    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.010913    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.010919    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.016809    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:20.210576    4318 request.go:632] Waited for 193.290597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:20.210656    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:20.210663    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.210675    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.210681    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.241143    4318 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0917 10:24:20.242017    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:20.242029    4318 pod_ready.go:82] duration metric: took 418.492753ms for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.242037    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.412058    4318 request.go:632] Waited for 169.980212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.412108    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.412115    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.412119    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.412124    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.426145    4318 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 10:24:20.611816    4318 request.go:632] Waited for 184.70602ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:20.611860    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:20.611919    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.611928    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.611934    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.620369    4318 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 10:24:20.811031    4318 request.go:632] Waited for 68.064136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.811067    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.811073    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.811120    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.811130    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.814429    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:21.010914    4318 request.go:632] Waited for 195.866244ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.010969    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.010976    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.010982    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.010986    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.013773    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.243275    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:21.243312    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.243339    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.243347    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.246247    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.411834    4318 request.go:632] Waited for 165.11515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.411870    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.411880    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.411906    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.411911    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.414456    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.742665    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:21.742680    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.742687    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.742691    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.745790    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:21.812507    4318 request.go:632] Waited for 66.156229ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.812582    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.812590    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.812600    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.812608    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.820287    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:24:22.242306    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:22.242320    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.242327    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.242331    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.244398    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.244874    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:22.244882    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.244888    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.244892    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.246990    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.247323    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:22.742294    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:22.742306    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.742313    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.742316    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.744814    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.745729    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:22.745740    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.745748    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.745751    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.748226    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.242342    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:23.242353    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.242359    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.242363    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.244374    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.244841    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:23.244851    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.244856    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.244861    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.246650    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:23.742870    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:23.742914    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.742924    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.742931    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.745627    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.746052    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:23.746060    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.746065    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.746068    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.747609    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.242218    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:24.242231    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.242238    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.242242    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.244278    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:24.244830    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:24.244840    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.244846    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.244849    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.246617    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.743710    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:24.743732    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.743767    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.743774    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.746703    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:24.747074    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:24.747081    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.747086    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.747091    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.748857    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.749268    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:25.243132    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:25.243162    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.243175    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.243182    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.246637    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:25.247243    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:25.247251    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.247257    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.247261    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.248791    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:25.743144    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:25.743185    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.743194    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.743200    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.745534    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:25.746096    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:25.746104    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.746110    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.746114    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.747777    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:26.243397    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:26.243422    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.243434    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.243439    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.246724    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:26.247251    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:26.247258    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.247264    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.247267    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.248850    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:26.743796    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:26.743812    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.743818    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.743822    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.746038    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:26.746535    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:26.746543    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.746548    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.746552    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.748223    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:27.243865    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:27.243907    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.243915    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.243921    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.246152    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:27.246675    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:27.246682    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.246690    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.246694    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.248406    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:27.248807    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:27.743171    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:27.743187    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.743194    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.743198    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.745500    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:27.745988    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:27.745997    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.746002    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.746006    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.748595    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:28.242282    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:28.242301    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.242313    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.242319    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.245501    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:28.246247    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:28.246255    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.246261    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.246264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.247902    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:28.743212    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:28.743236    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.743249    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.743260    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.746405    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:28.747013    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:28.747024    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.747033    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.747036    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.748962    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:29.242696    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:29.242721    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.242759    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.242768    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.246203    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:29.246735    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:29.246743    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.246748    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.246751    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.248540    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:29.248873    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:29.742874    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:29.742909    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.742916    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.742920    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.745853    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:29.746241    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:29.746248    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.746254    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.746258    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.747886    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:30.242344    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:30.242398    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.242412    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.242417    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.245482    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:30.246231    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:30.246239    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.246243    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.246249    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.247931    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:30.743687    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:30.743739    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.743748    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.743754    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.746284    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:30.746897    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:30.746904    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.746910    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.746919    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.748657    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.242762    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:31.242802    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.242815    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.242821    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.244879    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:31.245288    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:31.245296    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.245302    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.245305    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.246940    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.744167    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:31.744190    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.744201    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.744210    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.747694    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:31.748330    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:31.748354    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.748359    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.748363    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.750021    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.750280    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:32.243257    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:32.243276    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.243287    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.243295    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.246666    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:32.247294    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:32.247301    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.247307    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.247315    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.249071    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:32.742445    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:32.742465    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.742477    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.742486    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.745063    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:32.745573    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:32.745581    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.745586    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.745590    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.747244    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.242932    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:33.242948    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.242957    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.242960    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.245698    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:33.246162    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.246170    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.246176    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.246180    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.248030    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.743607    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:33.743630    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.743677    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.743686    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.747091    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:33.747696    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.747706    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.747715    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.747721    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.749482    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.749881    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.749891    4318 pod_ready.go:82] duration metric: took 13.507764282s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
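
The 13.5s wait above is the pod_ready loop re-fetching kube-apiserver-ha-744000-m03 on a roughly 500ms cadence (visible in the timestamps) until the pod's Ready condition flips from "False" to "True". A small helper sketch of that condition check using client-go types, assuming a *corev1.Pod fetched as in the requests above:

    package kubeutil

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether a Pod's Ready condition is True,
    // the same signal the pod_ready loop in the log polls for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }
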
	I0917 10:24:33.749898    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.749929    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:24:33.749934    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.749939    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.749944    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.751607    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.752009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:33.752016    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.752022    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.752026    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.753479    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.753776    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.753784    4318 pod_ready.go:82] duration metric: took 3.88171ms for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.753790    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.753823    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:24:33.753827    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.753833    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.753838    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.755454    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.755911    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:33.755918    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.755924    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.755927    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.757319    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.757679    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.757688    4318 pod_ready.go:82] duration metric: took 3.892056ms for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.757694    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.757728    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:24:33.757735    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.757741    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.757744    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.759325    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.759692    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.759699    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.759705    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.759708    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.761363    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.761694    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.761703    4318 pod_ready.go:82] duration metric: took 4.003379ms for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.761709    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.761744    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:24:33.761749    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.761754    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.761759    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.763321    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.763721    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:24:33.763727    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.763733    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.763737    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.765414    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.765712    4318 pod_ready.go:93] pod "kube-proxy-66bkb" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.765720    4318 pod_ready.go:82] duration metric: took 4.007111ms for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.765726    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.944183    4318 request.go:632] Waited for 178.404523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:24:33.944229    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:24:33.944237    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.944268    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.944273    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.946730    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.143628    4318 request.go:632] Waited for 196.302632ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:34.143662    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:34.143667    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.143673    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.143676    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.145586    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:34.145943    4318 pod_ready.go:93] pod "kube-proxy-6xd2h" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.145952    4318 pod_ready.go:82] duration metric: took 380.218476ms for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.145958    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.343736    4318 request.go:632] Waited for 197.699564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:24:34.343783    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:24:34.343789    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.343820    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.343834    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.346285    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.544565    4318 request.go:632] Waited for 197.654167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:34.544605    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:34.544613    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.544621    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.544627    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.547228    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.547536    4318 pod_ready.go:93] pod "kube-proxy-c5xbc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.547544    4318 pod_ready.go:82] duration metric: took 401.579042ms for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.547551    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.745694    4318 request.go:632] Waited for 198.04491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:24:34.745741    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:24:34.745751    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.745761    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.745768    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.749007    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:34.944446    4318 request.go:632] Waited for 194.709353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:34.944508    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:34.944519    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.944530    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.944538    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.948023    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:34.948529    4318 pod_ready.go:93] pod "kube-proxy-k9xsp" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.948539    4318 pod_ready.go:82] duration metric: took 400.98043ms for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.948546    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.144352    4318 request.go:632] Waited for 195.670277ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:24:35.144418    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:24:35.144427    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.144435    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.144444    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.148047    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:35.345672    4318 request.go:632] Waited for 197.054602ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:35.345814    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:35.345826    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.345837    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.345847    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.350008    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:24:35.350440    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:35.350449    4318 pod_ready.go:82] duration metric: took 401.89555ms for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.350455    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.545736    4318 request.go:632] Waited for 195.218553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:24:35.545818    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:24:35.545826    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.545834    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.545838    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.548444    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:35.743956    4318 request.go:632] Waited for 195.068268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:35.744009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:35.744018    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.744069    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.744076    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.747579    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:35.748084    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:35.748097    4318 pod_ready.go:82] duration metric: took 397.633311ms for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.748105    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.943849    4318 request.go:632] Waited for 195.677443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:35.943994    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:35.944005    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.944016    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.944023    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.947546    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.144032    4318 request.go:632] Waited for 195.696928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.144124    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.144136    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.144152    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.144160    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.147113    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:36.344824    4318 request.go:632] Waited for 96.483405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.344983    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.344994    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.345004    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.345015    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.348529    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.544910    4318 request.go:632] Waited for 195.649777ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.545008    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.545020    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.545031    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.545037    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.548104    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.748291    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.748355    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.748369    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.748376    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.751622    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.945151    4318 request.go:632] Waited for 192.867405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.945191    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.945197    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.945223    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.945245    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.948349    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.249285    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:37.249335    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.249350    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.249356    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.252559    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.344915    4318 request.go:632] Waited for 91.666148ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:37.345009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:37.345019    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.345029    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.345039    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.348586    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.348906    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:37.348918    4318 pod_ready.go:82] duration metric: took 1.600795502s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:37.348928    4318 pod_ready.go:39] duration metric: took 18.335907637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:24:37.348941    4318 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:24:37.349014    4318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:24:37.361991    4318 api_server.go:72] duration metric: took 18.519766947s to wait for apiserver process to appear ...
	I0917 10:24:37.362004    4318 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:24:37.362016    4318 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 10:24:37.365142    4318 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 10:24:37.365173    4318 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 10:24:37.365178    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.365184    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.365188    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.365770    4318 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 10:24:37.365800    4318 api_server.go:141] control plane version: v1.31.1
	I0917 10:24:37.365807    4318 api_server.go:131] duration metric: took 3.798093ms to wait for apiserver health ...
	I0917 10:24:37.365812    4318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 10:24:37.544057    4318 request.go:632] Waited for 178.188238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.544191    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.544207    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.544224    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.544234    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.549291    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:37.554725    4318 system_pods.go:59] 26 kube-system pods found
	I0917 10:24:37.554740    4318 system_pods.go:61] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.554746    4318 system_pods.go:61] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.554752    4318 system_pods.go:61] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:24:37.554756    4318 system_pods.go:61] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running
	I0917 10:24:37.554759    4318 system_pods.go:61] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:24:37.554761    4318 system_pods.go:61] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:24:37.554764    4318 system_pods.go:61] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:24:37.554769    4318 system_pods.go:61] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running
	I0917 10:24:37.554772    4318 system_pods.go:61] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:24:37.554774    4318 system_pods.go:61] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:24:37.554778    4318 system_pods.go:61] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running
	I0917 10:24:37.554781    4318 system_pods.go:61] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:24:37.554784    4318 system_pods.go:61] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:24:37.554787    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running
	I0917 10:24:37.554791    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:24:37.554794    4318 system_pods.go:61] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:24:37.554797    4318 system_pods.go:61] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:24:37.554800    4318 system_pods.go:61] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:24:37.554802    4318 system_pods.go:61] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running
	I0917 10:24:37.554805    4318 system_pods.go:61] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:24:37.554808    4318 system_pods.go:61] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running
	I0917 10:24:37.554811    4318 system_pods.go:61] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:24:37.554813    4318 system_pods.go:61] "kube-vip-ha-744000" [bcb8c990-8b77-4e1d-bf96-614e9da8bf60] Running
	I0917 10:24:37.554816    4318 system_pods.go:61] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:24:37.554818    4318 system_pods.go:61] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:24:37.554821    4318 system_pods.go:61] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:24:37.554825    4318 system_pods.go:74] duration metric: took 189.008209ms to wait for pod list to return data ...
	I0917 10:24:37.554830    4318 default_sa.go:34] waiting for default service account to be created ...
	I0917 10:24:37.744848    4318 request.go:632] Waited for 189.951036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:24:37.744937    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:24:37.744950    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.744962    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.744968    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.748818    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.748898    4318 default_sa.go:45] found service account: "default"
	I0917 10:24:37.748910    4318 default_sa.go:55] duration metric: took 194.07297ms for default service account to be created ...
	I0917 10:24:37.748917    4318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 10:24:37.945360    4318 request.go:632] Waited for 196.381657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.945493    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.945504    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.945515    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.945524    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.951048    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:37.956873    4318 system_pods.go:86] 26 kube-system pods found
	I0917 10:24:37.956886    4318 system_pods.go:89] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.956893    4318 system_pods.go:89] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.956898    4318 system_pods.go:89] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:24:37.956901    4318 system_pods.go:89] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running
	I0917 10:24:37.956905    4318 system_pods.go:89] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:24:37.956908    4318 system_pods.go:89] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:24:37.956910    4318 system_pods.go:89] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:24:37.956915    4318 system_pods.go:89] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running
	I0917 10:24:37.956918    4318 system_pods.go:89] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:24:37.956921    4318 system_pods.go:89] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:24:37.956927    4318 system_pods.go:89] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running
	I0917 10:24:37.956931    4318 system_pods.go:89] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:24:37.956933    4318 system_pods.go:89] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:24:37.956939    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running
	I0917 10:24:37.956943    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:24:37.956945    4318 system_pods.go:89] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:24:37.956948    4318 system_pods.go:89] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:24:37.956951    4318 system_pods.go:89] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:24:37.956954    4318 system_pods.go:89] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running
	I0917 10:24:37.956957    4318 system_pods.go:89] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:24:37.956960    4318 system_pods.go:89] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running
	I0917 10:24:37.956962    4318 system_pods.go:89] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:24:37.956966    4318 system_pods.go:89] "kube-vip-ha-744000" [bcb8c990-8b77-4e1d-bf96-614e9da8bf60] Running
	I0917 10:24:37.956968    4318 system_pods.go:89] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:24:37.956972    4318 system_pods.go:89] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:24:37.956975    4318 system_pods.go:89] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:24:37.956980    4318 system_pods.go:126] duration metric: took 208.057925ms to wait for k8s-apps to be running ...
	I0917 10:24:37.956985    4318 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 10:24:37.957044    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:24:37.968066    4318 system_svc.go:56] duration metric: took 11.076755ms WaitForService to wait for kubelet
	I0917 10:24:37.968081    4318 kubeadm.go:582] duration metric: took 19.125854064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:24:37.968093    4318 node_conditions.go:102] verifying NodePressure condition ...
	I0917 10:24:38.144749    4318 request.go:632] Waited for 176.615288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 10:24:38.144801    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 10:24:38.144806    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:38.144812    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:38.144819    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:38.147413    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:38.148237    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148247    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148254    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148257    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148261    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148265    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148268    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148271    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148274    4318 node_conditions.go:105] duration metric: took 180.176513ms to run NodePressure ...
	I0917 10:24:38.148284    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:24:38.148299    4318 start.go:255] writing updated cluster config ...
	I0917 10:24:38.170792    4318 out.go:201] 
	I0917 10:24:38.192139    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:38.192258    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.214598    4318 out.go:177] * Starting "ha-744000-m04" worker node in "ha-744000" cluster
	I0917 10:24:38.256637    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:24:38.256664    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:24:38.256839    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:24:38.256857    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:24:38.256981    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.257985    4318 start.go:360] acquireMachinesLock for ha-744000-m04: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:24:38.258078    4318 start.go:364] duration metric: took 72.145µs to acquireMachinesLock for "ha-744000-m04"
	I0917 10:24:38.258103    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:24:38.258112    4318 fix.go:54] fixHost starting: m04
	I0917 10:24:38.258540    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:38.258566    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:38.268106    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51981
	I0917 10:24:38.268448    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:38.268812    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:38.268827    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:38.269077    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:38.269188    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:24:38.269289    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetState
	I0917 10:24:38.269369    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.269469    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 3930
	I0917 10:24:38.270534    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid 3930 missing from process table
	I0917 10:24:38.270552    4318 fix.go:112] recreateIfNeeded on ha-744000-m04: state=Stopped err=<nil>
	I0917 10:24:38.270560    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	W0917 10:24:38.270638    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:24:38.291868    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m04" ...
	I0917 10:24:38.333636    4318 main.go:141] libmachine: (ha-744000-m04) Calling .Start
	I0917 10:24:38.333893    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.333997    4318 main.go:141] libmachine: (ha-744000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid
	I0917 10:24:38.334050    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Using UUID a75a0481-aaf0-49d3-9d6e-de3c56706456
	I0917 10:24:38.361417    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Generated MAC b6:cf:5d:a2:4f:b0
	I0917 10:24:38.361439    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:24:38.361574    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75a0481-aaf0-49d3-9d6e-de3c56706456", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f6270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:24:38.361608    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75a0481-aaf0-49d3-9d6e-de3c56706456", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f6270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:24:38.361683    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a75a0481-aaf0-49d3-9d6e-de3c56706456", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/ha-744000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:24:38.361733    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a75a0481-aaf0-49d3-9d6e-de3c56706456 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/ha-744000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:24:38.361747    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:24:38.363077    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Pid is 4356
	I0917 10:24:38.363455    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Attempt 0
	I0917 10:24:38.363472    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.363519    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 4356
	I0917 10:24:38.365806    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Searching for b6:cf:5d:a2:4f:b0 in /var/db/dhcpd_leases ...
	I0917 10:24:38.365879    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:24:38.365922    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0cb7}
	I0917 10:24:38.365937    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:24:38.365950    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:24:38.365959    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:24:38.365986    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Found match: b6:cf:5d:a2:4f:b0
	I0917 10:24:38.365994    4318 main.go:141] libmachine: (ha-744000-m04) DBG | IP: 192.169.0.8
	I0917 10:24:38.366035    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetConfigRaw
	I0917 10:24:38.366790    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:24:38.367002    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.367474    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:24:38.367487    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:24:38.367618    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:24:38.367733    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:24:38.367825    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:24:38.367932    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:24:38.368026    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:24:38.368135    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:38.368308    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:24:38.368315    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:24:38.371140    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:24:38.380744    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:24:38.381595    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:24:38.381618    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:24:38.381626    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:24:38.381634    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:24:38.766023    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:24:38.766038    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:24:38.880838    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:24:38.880856    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:24:38.880875    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:24:38.880896    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:24:38.881691    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:24:38.881699    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:24:44.498444    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:24:44.498459    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:24:44.498494    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:24:44.523076    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:25:13.428240    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:25:13.428258    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.428409    4318 buildroot.go:166] provisioning hostname "ha-744000-m04"
	I0917 10:25:13.428420    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.428514    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.428620    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.428723    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.428810    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.428889    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.429066    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.429209    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.429217    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m04 && echo "ha-744000-m04" | sudo tee /etc/hostname
	I0917 10:25:13.489074    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m04
	
	I0917 10:25:13.489089    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.489213    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.489306    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.489396    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.489496    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.489633    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.489780    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.489791    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:25:13.545140    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:25:13.545156    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:25:13.545164    4318 buildroot.go:174] setting up certificates
	I0917 10:25:13.545177    4318 provision.go:84] configureAuth start
	I0917 10:25:13.545184    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.545313    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:25:13.545408    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.545491    4318 provision.go:143] copyHostCerts
	I0917 10:25:13.545519    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:25:13.545566    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:25:13.545572    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:25:13.545709    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:25:13.545914    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:25:13.545947    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:25:13.545952    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:25:13.546020    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:25:13.546170    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:25:13.546203    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:25:13.546208    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:25:13.546273    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:25:13.546422    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m04 san=[127.0.0.1 192.169.0.8 ha-744000-m04 localhost minikube]
	I0917 10:25:13.728947    4318 provision.go:177] copyRemoteCerts
	I0917 10:25:13.729001    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:25:13.729019    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.729159    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.729267    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.729352    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.729436    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:13.760341    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:25:13.760415    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:25:13.780212    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:25:13.780295    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:25:13.799969    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:25:13.800048    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:25:13.820126    4318 provision.go:87] duration metric: took 274.938832ms to configureAuth
	I0917 10:25:13.820140    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:25:13.820316    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:25:13.820363    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:13.820492    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.820577    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.820675    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.820756    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.820822    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.820952    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.821086    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.821093    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:25:13.869340    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:25:13.869359    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:25:13.869441    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:25:13.869457    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.869595    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.869683    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.869771    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.869861    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.870006    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.870149    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.870194    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:25:13.929484    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:25:13.929501    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.929632    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.929718    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.929806    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.929887    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.930023    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.930160    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.930175    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:25:15.508327    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:25:15.508343    4318 machine.go:96] duration metric: took 37.140625742s to provisionDockerMachine
	I0917 10:25:15.508350    4318 start.go:293] postStartSetup for "ha-744000-m04" (driver="hyperkit")
	I0917 10:25:15.508359    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:25:15.508370    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.508567    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:25:15.508581    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.508684    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.508771    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.508863    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.508959    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.539960    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:25:15.543053    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:25:15.543063    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:25:15.543160    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:25:15.543298    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:25:15.543305    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:25:15.543461    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:25:15.551517    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:25:15.570767    4318 start.go:296] duration metric: took 62.406299ms for postStartSetup
	I0917 10:25:15.570789    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.570981    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:25:15.570995    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.571091    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.571171    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.571256    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.571333    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.602758    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:25:15.602836    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:25:15.637575    4318 fix.go:56] duration metric: took 37.37922575s for fixHost
	I0917 10:25:15.637622    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.637768    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.637924    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.638031    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.638176    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.638325    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:15.638471    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:15.638479    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:25:15.688928    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593915.722853111
	
	I0917 10:25:15.688940    4318 fix.go:216] guest clock: 1726593915.722853111
	I0917 10:25:15.688945    4318 fix.go:229] Guest: 2024-09-17 10:25:15.722853111 -0700 PDT Remote: 2024-09-17 10:25:15.63759 -0700 PDT m=+131.293327303 (delta=85.263111ms)
	I0917 10:25:15.688955    4318 fix.go:200] guest clock delta is within tolerance: 85.263111ms
	I0917 10:25:15.688959    4318 start.go:83] releasing machines lock for "ha-744000-m04", held for 37.430633857s
	I0917 10:25:15.688978    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.689103    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:25:15.710671    4318 out.go:177] * Found network options:
	I0917 10:25:15.731491    4318 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 10:25:15.753310    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.753333    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.753342    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:25:15.753356    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.753871    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.754022    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.754119    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:25:15.754146    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	W0917 10:25:15.754178    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.754208    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.754223    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:25:15.754296    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.754303    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:25:15.754334    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.754432    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.754453    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.754575    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.754604    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.754689    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.754711    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.754792    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	W0917 10:25:15.782647    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:25:15.782713    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:25:15.824742    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:25:15.824761    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:25:15.824849    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:25:15.840222    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:25:15.849242    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:25:15.858317    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:25:15.858387    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:25:15.867462    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:25:15.875738    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:25:15.884682    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:25:15.893510    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:25:15.902446    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:25:15.911295    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:25:15.919994    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:25:15.928900    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:25:15.936904    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:25:15.944894    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:25:16.041231    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:25:16.060721    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:25:16.060799    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:25:16.080747    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:25:16.095004    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:25:16.114244    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:25:16.125786    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:25:16.137258    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:25:16.158423    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:25:16.170393    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:25:16.185414    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:25:16.188334    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:25:16.196827    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:25:16.210659    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:25:16.305554    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:25:16.409957    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:25:16.409982    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
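	The 130-byte /etc/docker/daemon.json pushed here is not reproduced in the log; purely for illustration, a daemon.json that selects the cgroupfs driver in the way this step describes would contain something like:
	
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	
	exec-opts with native.cgroupdriver is dockerd's documented knob for the cgroup driver; the actual file from this run may set additional keys.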
	I0917 10:25:16.425083    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:25:16.535715    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:26:17.562416    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.026297453s)
	I0917 10:26:17.562497    4318 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:26:17.630222    4318 out.go:201] 
	W0917 10:26:17.651239    4318 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:25:13 ha-744000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.456528847Z" level=info msg="Starting up"
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.457229245Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.457756278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=515
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.475582216Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490758453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490898800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490976043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491011334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491152047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491195568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491328519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491366944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491397636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491431172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491542048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491732624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493310341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493359335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493488280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493534970Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493652714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493714896Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494789743Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494871313Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494917161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494950579Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494983897Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495053063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495291226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495375682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495419457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495464742Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495500431Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495531945Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495563543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495597416Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495628537Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495658774Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495687956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495720478Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495838245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495897691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495950377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495999910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496037282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496068360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496098684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496129402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496180048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496224888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496258746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496292925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496328738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496361060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496398155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496429539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496458278Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496532105Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496577809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496631209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496668767Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496701760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496732507Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496764331Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496955260Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497045520Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497161388Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497218646Z" level=info msg="containerd successfully booted in 0.022496s"
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.478225250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.497615871Z" level=info msg="Loading containers: start."
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.589404703Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.466302251Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.511791263Z" level=info msg="Loading containers: done."
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.521663721Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.521829028Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.541037196Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:25:15 ha-744000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.542461858Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:25:16 ha-744000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.587552960Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588424393Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588788736Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588860910Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588877844Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:25:17 ha-744000-m04 dockerd[1095]: time="2024-09-17T17:25:17.626813653Z" level=info msg="Starting up"
	Sep 17 17:26:17 ha-744000-m04 dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 10:26:17.651325    4318 out.go:270] * 
	W0917 10:26:17.652544    4318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:26:17.714012    4318 out.go:201] 
	
	
	==> Docker <==
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.268707916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281047915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281247421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281280865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281415634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.306942894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.307034217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.307049248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.307123216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345168645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345400515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345417057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345534846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371315730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371503024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371534239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371698549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:50 ha-744000 dockerd[1165]: time="2024-09-17T17:24:50.911074437Z" level=info msg="shim disconnected" id=8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2 namespace=moby
	Sep 17 17:24:50 ha-744000 dockerd[1165]: time="2024-09-17T17:24:50.911145697Z" level=warning msg="cleaning up after shim disconnected" id=8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2 namespace=moby
	Sep 17 17:24:50 ha-744000 dockerd[1165]: time="2024-09-17T17:24:50.911154909Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:24:50 ha-744000 dockerd[1159]: time="2024-09-17T17:24:50.911891905Z" level=info msg="ignoring event" container=8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.183917900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.184095170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.184121704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.184219800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1b95d7a1c7708       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   375cde06a4bcf       storage-provisioner
	079da006755a7       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   f0eee6e67fe42       busybox-7dff88458-cn52t
	9f76145e8eaf7       12968670680f4                                                                                         About a minute ago   Running             kindnet-cni               1                   8b4b5191649e7       kindnet-c59lr
	6a4aba3acb1e9       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   3888ce04e78db       coredns-7c65d6cfc9-khnlh
	8fea3c0c8d014       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   375cde06a4bcf       storage-provisioner
	fb8b83fe49a6e       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   f1782d63db94f       kube-proxy-6xd2h
	24cfd031ec879       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   244f5bc456efc       coredns-7c65d6cfc9-j9jcc
	12b3b4eba9d4b       175ffd71cce3d                                                                                         2 minutes ago        Running             kube-controller-manager   2                   1ec7133566130       kube-controller-manager-ha-744000
	cfbfd57cf2b56       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  0                   433c480eea542       kube-vip-ha-744000
	2e26c6d8d6f01       6bab7719df100                                                                                         2 minutes ago        Running             kube-apiserver            1                   17c507064e8cf       kube-apiserver-ha-744000
	e2a0b2a78de14       175ffd71cce3d                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   1ec7133566130       kube-controller-manager-ha-744000
	a7645ef2ae8dd       9aa1fad941575                                                                                         2 minutes ago        Running             kube-scheduler            1                   fbf79ae31cbab       kube-scheduler-ha-744000
	23a7e0d95a77c       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      1                   55cb3d05ddf34       etcd-ha-744000
	2d870e01d6884       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   35535e8fc0b28       busybox-7dff88458-cn52t
	483eb8f98687f       c69fa2e9cbf5f                                                                                         7 minutes ago        Exited              coredns                   0                   8108990228d29       coredns-7c65d6cfc9-khnlh
	916943d59881d       c69fa2e9cbf5f                                                                                         7 minutes ago        Exited              coredns                   0                   804209193fefd       coredns-7c65d6cfc9-j9jcc
	c585358c16494       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              8 minutes ago        Exited              kindnet-cni               0                   1b8517a154f2d       kindnet-c59lr
	8b4d53aa2a212       60c005f310ff3                                                                                         8 minutes ago        Exited              kube-proxy                0                   7026bc0d7935b       kube-proxy-6xd2h
	b88f9e96fc4a3       9aa1fad941575                                                                                         8 minutes ago        Exited              kube-scheduler            0                   26a4b719c81b3       kube-scheduler-ha-744000
	8d4b19b4762b9       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      0                   d38e9fc592fbb       etcd-ha-744000
	0468a8663a15a       6bab7719df100                                                                                         8 minutes ago        Exited              kube-apiserver            0                   183fe28646c54       kube-apiserver-ha-744000
	
	
	==> coredns [24cfd031ec87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52682 - 33898 "HINFO IN 2709939145458862568.721558315158165230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009931439s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[318103159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.683) (total time: 30003ms):
	Trace[318103159]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:24:50.686)
	Trace[318103159]: [30.003131559s] [30.003131559s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1979128092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1979128092]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1979128092]: [30.000652416s] [30.000652416s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1978210991]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1978210991]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1978210991]: [30.000766886s] [30.000766886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [483eb8f98687] <==
	[INFO] 10.244.0.4:49921 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001090777s
	[INFO] 10.244.0.4:38072 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093692s
	[INFO] 10.244.0.4:52268 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010201s
	[INFO] 10.244.0.4:39332 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065274s
	[INFO] 10.244.1.2:50067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097272s
	[INFO] 10.244.1.2:59778 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076291s
	[INFO] 10.244.1.2:40527 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006494s
	[INFO] 10.244.1.2:55267 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103302s
	[INFO] 10.244.1.2:48936 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076215s
	[INFO] 10.244.2.2:35568 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075643s
	[INFO] 10.244.2.2:33950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075232s
	[INFO] 10.244.0.4:34208 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090644s
	[INFO] 10.244.0.4:48674 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132902s
	[INFO] 10.244.0.4:33737 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008542s
	[INFO] 10.244.0.4:52920 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144911s
	[INFO] 10.244.1.2:35106 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080607s
	[INFO] 10.244.1.2:56698 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084976s
	[INFO] 10.244.2.2:34296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174512s
	[INFO] 10.244.2.2:33488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117345s
	[INFO] 10.244.0.4:38670 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010498s
	[INFO] 10.244.0.4:40491 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111462s
	[INFO] 10.244.0.4:48717 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000119132s
	[INFO] 10.244.2.2:47158 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110576s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a4aba3acb1e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60360 - 19575 "HINFO IN 3607648931521447410.3411894034218696920. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009401347s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960564509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1960564509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.746)
	Trace[1960564509]: [30.00213331s] [30.00213331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197674287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1197674287]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[1197674287]: [30.002759704s] [30.002759704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[633118280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30003ms):
	Trace[633118280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[633118280]: [30.003193097s] [30.003193097s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [916943d59881] <==
	[INFO] 10.244.0.4:37739 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098877s
	[INFO] 10.244.0.4:40547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091048s
	[INFO] 10.244.1.2:44593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150145s
	[INFO] 10.244.1.2:56172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000115318s
	[INFO] 10.244.1.2:39487 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042632s
	[INFO] 10.244.2.2:45820 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136035s
	[INFO] 10.244.2.2:45888 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124378s
	[INFO] 10.244.2.2:33921 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103985s
	[INFO] 10.244.2.2:43324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000079133s
	[INFO] 10.244.2.2:40281 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099458s
	[INFO] 10.244.2.2:55515 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064602s
	[INFO] 10.244.1.2:35470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094431s
	[INFO] 10.244.1.2:39318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101905s
	[INFO] 10.244.2.2:33069 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125468s
	[INFO] 10.244.2.2:58055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005s
	[INFO] 10.244.0.4:42955 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119337s
	[INFO] 10.244.1.2:56148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133985s
	[INFO] 10.244.1.2:41074 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070637s
	[INFO] 10.244.1.2:57011 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097568s
	[INFO] 10.244.1.2:54560 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000088217s
	[INFO] 10.244.2.2:40699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009838s
	[INFO] 10.244.2.2:56915 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009188s
	[INFO] 10.244.2.2:59087 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000063136s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-744000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T10_18_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:18:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:26:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-744000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e19ab4b42d3d4ad9a9c9862970c0a605
	  System UUID:                bcb541bd-0000-0000-81db-c015832629bb
	  Boot ID:                    3e522cae-7866-41e9-a155-4d8cabdebe35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cn52t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 coredns-7c65d6cfc9-j9jcc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m9s
	  kube-system                 coredns-7c65d6cfc9-khnlh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m9s
	  kube-system                 etcd-ha-744000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m13s
	  kube-system                 kindnet-c59lr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m10s
	  kube-system                 kube-apiserver-ha-744000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-ha-744000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-6xd2h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-ha-744000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-vip-ha-744000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m8s                   kube-proxy       
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m20s)  kubelet          Node ha-744000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m20s)  kubelet          Node ha-744000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m20s)  kubelet          Node ha-744000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node ha-744000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node ha-744000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node ha-744000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m10s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  NodeReady                7m49s                  kubelet          Node ha-744000 status is now: NodeReady
	  Normal  RegisteredNode           7m10s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  Starting                 2m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node ha-744000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node ha-744000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x7 over 2m55s)  kubelet          Node ha-744000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           113s                   node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	
	
	Name:               ha-744000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T10_19_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:19:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-744000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 c94aa5595d5f4a1cb88c3b118576895e
	  System UUID:                84414fed-0000-0000-a88c-11fa06a6299e
	  Boot ID:                    11a7e2f2-378b-40ca-b409-09a9376b68fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qcdwg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 etcd-ha-744000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m15s
	  kube-system                 kindnet-r77t5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m17s
	  kube-system                 kube-apiserver-ha-744000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-ha-744000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-k9xsp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-scheduler-ha-744000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-vip-ha-744000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m9s                   kube-proxy       
	  Normal   Starting                 4m                     kube-proxy       
	  Normal   Starting                 7m13s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     7m17s                  cidrAllocator    Node ha-744000-m02 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  7m17s (x8 over 7m17s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m17s (x8 over 7m17s)  kubelet          Node ha-744000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m17s (x7 over 7m17s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m14s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           7m10s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           6m2s                   node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Warning  Rebooted                 4m4s                   kubelet          Node ha-744000-m02 has been rebooted, boot id: 820b0469-454f-41f2-99e6-1215d352a125
	  Normal   Starting                 4m4s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m4s                   kubelet          Node ha-744000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m4s                   kubelet          Node ha-744000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m4s                   kubelet          Node ha-744000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m56s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node ha-744000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m23s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           2m8s                   node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           113s                   node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	
	
	Name:               ha-744000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T10_20_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:20:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:26:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:24:19 +0000   Tue, 17 Sep 2024 17:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:24:19 +0000   Tue, 17 Sep 2024 17:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:24:19 +0000   Tue, 17 Sep 2024 17:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:24:19 +0000   Tue, 17 Sep 2024 17:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-744000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e69ad22bc724d5dbc6622b886e4d520
	  System UUID:                26294a36-0000-0000-a6bd-c4320ca3711f
	  Boot ID:                    c6523479-0bc3-4adc-8151-bdc3d51dbc75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qcq64                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 etcd-ha-744000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-bdjj4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-744000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-744000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-c5xbc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-744000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-744000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 117s                   kube-proxy       
	  Normal   Starting                 6m6s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     6m10s                  cidrAllocator    Node ha-744000-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-744000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-744000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet          Node ha-744000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m9s                   node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	  Normal   RegisteredNode           6m5s                   node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	  Normal   RegisteredNode           6m2s                   node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	  Normal   RegisteredNode           3m56s                  node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	  Normal   RegisteredNode           2m23s                  node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	  Normal   RegisteredNode           2m8s                   node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	  Normal   Starting                 2m                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m                     kubelet          Node ha-744000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m                     kubelet          Node ha-744000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                     kubelet          Node ha-744000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m                     kubelet          Node ha-744000-m03 has been rebooted, boot id: c6523479-0bc3-4adc-8151-bdc3d51dbc75
	  Normal   RegisteredNode           113s                   node-controller  Node ha-744000-m03 event: Registered Node ha-744000-m03 in Controller
	
	
	Name:               ha-744000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T10_21_08_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:21:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:22:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-744000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 915aa5fe15514f39b3c6acca73576405
	  System UUID:                a75a49d3-0000-0000-9d6e-de3c56706456
	  Boot ID:                    1f7e3f8e-fb14-42e9-8e73-84c9c5c4de7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wqkz7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-proxy-66bkb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m5s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m12s (x2 over 5m13s)  kubelet          Node ha-744000-m04 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     5m12s                  cidrAllocator    Node ha-744000-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     5m12s                  cidrAllocator    Node ha-744000-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientPID     5m12s (x2 over 5m13s)  kubelet          Node ha-744000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m12s (x2 over 5m13s)  kubelet          Node ha-744000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  NodeReady                4m50s                  kubelet          Node ha-744000-m04 status is now: NodeReady
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           113s                   node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-744000-m04 status is now: NodeNotReady
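
The four Conditions tables above reduce to one question per node: is Ready True? A small Go sketch that prints just that summary (kubectl on PATH is an assumption); against this cluster it would show ha-744000-m04 with Ready=Unknown after its kubelet stopped posting status:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Summarize each node's Ready condition, a compact view of the
// Conditions tables above. Assumes kubectl is on PATH.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
				Reason string `json:"reason"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatal(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s\tReady=%s (%s)\n", n.Metadata.Name, c.Status, c.Reason)
			}
		}
	}
}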
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035497] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007988] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.713580] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007483] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.860704] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +1.323097] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.568053] systemd-fstab-generator[470]: Ignoring "noauto" option for root device
	[  +0.088340] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +1.270007] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.699814] systemd-fstab-generator[1089]: Ignoring "noauto" option for root device
	[  +0.246160] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.113601] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.118007] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +2.448707] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.103404] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.099552] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.136889] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.447009] systemd-fstab-generator[1564]: Ignoring "noauto" option for root device
	[  +6.924512] kauditd_printk_skb: 271 callbacks suppressed
	[ +22.054441] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 17:24] kauditd_printk_skb: 82 callbacks suppressed
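
The dmesg section is captured from inside the guest VM. One way to pull the same ring buffer by hand, sketched in Go; the minikube binary being on PATH is an assumption, and the profile name ha-744000 comes from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Pull the guest kernel ring buffer over minikube ssh; the command
// after "--" runs in the guest's shell, so the pipe executes remotely.
func main() {
	out, err := exec.Command("minikube", "-p", "ha-744000",
		"ssh", "--", "sudo dmesg | tail -n 25").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}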
	
	
	==> etcd [23a7e0d95a77] <==
	{"level":"warn","ts":"2024-09-17T17:24:07.953209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:07.957518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:07.962474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:07.963109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:08.063268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:08.164017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:08.831479Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"557d957d9f2c237a","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:08.831734Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"557d957d9f2c237a","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:11.317678Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:11.317815Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:12.833594Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"557d957d9f2c237a","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:12.833967Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"557d957d9f2c237a","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:16.318218Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:16.318244Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:16.841328Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"557d957d9f2c237a","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:16.841777Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"557d957d9f2c237a","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-17T17:24:20.424803Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.428661Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.428974Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.516533Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"557d957d9f2c237a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T17:24:20.516577Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.547191Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"557d957d9f2c237a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T17:24:20.549649Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:24:21.318628Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:21.318722Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
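
The repeated "connection refused" probes against 192.169.0.7:2380 (m03's etcd peer port) stop once the peer becomes active at 17:24:20. A trivial reachability check of the same kind, as a Go sketch; a dial error of "connect: connection refused" matches the prober messages above:

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the etcd peer port referenced by the warnings above.
func main() {
	conn, err := net.DialTimeout("tcp", "192.169.0.7:2380", 2*time.Second)
	if err != nil {
		fmt.Println("peer unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("peer port open")
}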
	
	
	==> etcd [8d4b19b4762b] <==
	2024/09/17 17:22:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:22:56.451985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.563767123s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:22:56.451994Z","caller":"traceutil/trace.go:171","msg":"trace[313034458] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; }","duration":"5.563777996s","start":"2024-09-17T17:22:50.888214Z","end":"2024-09-17T17:22:56.451992Z","steps":["trace[313034458] 'agreement among raft nodes before linearized reading'  (duration: 5.563767198s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:22:56.452003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:22:50.888192Z","time spent":"5.563809109s","remote":"127.0.0.1:56830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true "}
	2024/09/17 17:22:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:22:56.480665Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:22:56.480691Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:22:56.480721Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:22:56.480823Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.480834Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.480849Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484198Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484258Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484324Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484373Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484412Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.484422Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.484436Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485143Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485195Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485239Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485269Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.489683Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:22:56.489807Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:22:56.489816Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-744000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
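
To confirm which of the member IDs above (b8c6c7563d17d844, 429e60237c9af887, 557d957d9f2c237a) remain registered after the restart, one option is running etcdctl inside the etcd pod. A sketch only: the certificate paths are an assumption based on minikube's usual kubeadm layout, and kubectl on PATH is assumed:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// List etcd members from inside the etcd-ha-744000 pod.
// Cert paths below are assumed from minikube's typical layout.
func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"exec", "etcd-ha-744000", "--",
		"etcdctl", "member", "list", "-w", "table",
		"--endpoints=https://127.0.0.1:2379",
		"--cacert=/var/lib/minikube/certs/etcd/ca.crt",
		"--cert=/var/lib/minikube/certs/etcd/server.crt",
		"--key=/var/lib/minikube/certs/etcd/server.key").CombinedOutput()
	if err != nil {
		log.Fatalf("etcdctl: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}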
	
	
	==> kernel <==
	 17:26:20 up 3 min,  0 users,  load average: 0.45, 0.35, 0.14
	Linux ha-744000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f76145e8eaf] <==
	I0917 17:25:41.506283       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:25:51.512701       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:25:51.512756       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:25:51.513005       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:25:51.513016       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:25:51.513634       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:25:51.513673       1 main.go:299] handling current node
	I0917 17:25:51.513683       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:25:51.513690       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:01.506984       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:01.507236       1 main.go:299] handling current node
	I0917 17:26:01.507470       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:01.507663       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:01.508002       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:01.508317       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:01.508523       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:01.508659       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:11.511261       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:11.511335       1 main.go:299] handling current node
	I0917 17:26:11.511353       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:11.511367       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:11.512152       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:11.512248       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:11.512772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:11.512871       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [c585358c1649] <==
	I0917 17:22:25.536515       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:35.538913       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:22:35.539072       1 main.go:299] handling current node
	I0917 17:22:35.539097       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:22:35.539156       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:35.539326       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:22:35.539442       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:22:35.539599       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:22:35.539711       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:22:45.538117       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:22:45.538187       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:45.538682       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:22:45.538745       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:22:45.538817       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:22:45.538826       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:22:45.539211       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:22:45.539277       1 main.go:299] handling current node
	I0917 17:22:55.537919       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:22:55.537958       1 main.go:299] handling current node
	I0917 17:22:55.538082       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:22:55.538164       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:55.541016       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:22:55.541068       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:22:55.541176       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:22:55.541204       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
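
Both kindnet containers log the same reconciliation loop: for each node, read its IP and PodCIDR and handle routing accordingly. The node-to-CIDR mapping they keep printing ("Node X has CIDR [...]") can be checked directly; a Go sketch, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// Print each node's PodCIDR, the mapping the kindnet lines above log.
func main() {
	out, err := exec.Command("kubectl", "get", "nodes",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}`).Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fmt.Println(line)
	}
}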
	
	
	==> kube-apiserver [0468a8663a15] <==
	W0917 17:22:56.472516       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472542       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472566       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472591       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472619       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472645       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472671       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472697       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472722       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472761       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472797       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 17:22:56.472868       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00d2f0210)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0917 17:22:56.472958       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.478008       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0917 17:22:56.478339       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.478396       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.478410       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0917 17:22:56.478623       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 17:22:56.478774       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00d2f0220)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0917 17:22:56.478935       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0917 17:22:56.479083       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 17:22:56.479195       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.479244       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.480492       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	I0917 17:22:56.494593       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
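
The grpc dial failures to 127.0.0.1:2379 are this apiserver losing its local etcd during the restart. The apiserver's readiness breakdown exposes its etcd check explicitly; a Go sketch, assuming kubectl on PATH and credentials that permit raw API access:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Fetch the apiserver's per-check readiness report; the etcd entry
// goes unhealthy during dial failures like those logged above.
func main() {
	out, err := exec.Command("kubectl", "get", "--raw", "/readyz?verbose").Output()
	if err != nil {
		log.Fatalf("readyz: %v", err)
	}
	fmt.Print(string(out))
}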
	
	
	==> kube-apiserver [2e26c6d8d6f0] <==
	I0917 17:23:52.297690       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0917 17:23:52.297698       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0917 17:23:52.442975       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:23:52.450426       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:23:52.450605       1 policy_source.go:224] refreshing policies
	I0917 17:23:52.475151       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:23:52.476021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:23:52.476815       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:23:52.476953       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:23:52.477453       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:23:52.477542       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:23:52.479434       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:23:52.483086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:23:52.483434       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:23:52.483528       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:23:52.483600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:23:52.483707       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:23:52.484124       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0917 17:23:52.486549       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0917 17:23:52.488389       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:23:52.492209       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:23:52.498932       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 17:23:52.503018       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 17:23:53.290215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 17:23:53.614881       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
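
The two "Resetting endpoints for master service" lines show the kubernetes service flipping between control-plane IPs (192.169.0.6, then 192.169.0.5) as apiservers come back. The current endpoint list can be inspected directly; a Go sketch, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Show which control-plane IPs currently back the "kubernetes"
// service, the list the lease messages above are rewriting.
func main() {
	out, err := exec.Command("kubectl", "get", "endpoints",
		"kubernetes", "-o", "yaml").Output()
	if err != nil {
		log.Fatalf("kubectl get endpoints: %v", err)
	}
	fmt.Print(string(out))
}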
	
	
	==> kube-controller-manager [12b3b4eba9d4] <==
	I0917 17:24:19.356893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.000871ms"
	I0917 17:24:19.357207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="267.968µs"
	I0917 17:24:20.624243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.206175ms"
	I0917 17:24:20.624557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="263.213µs"
	I0917 17:24:20.837058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43µs"
	I0917 17:24:21.902729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.285186ms"
	I0917 17:24:21.902872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.727µs"
	I0917 17:24:21.928959       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.831µs"
	I0917 17:24:21.939400       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xhksf\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 17:24:21.939903       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e8e8504b-8b6f-4ef7-808e-297a73c11a8b", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xhksf": the object has been modified; please apply your changes to the latest version and try again
	I0917 17:24:22.798492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.999368ms"
	I0917 17:24:22.798935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="273.004µs"
	I0917 17:24:36.673349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m04"
	I0917 17:24:36.687585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m04"
	I0917 17:24:36.784621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m04"
	I0917 17:24:41.567880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m04"
	I0917 17:24:59.455998       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xhksf\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 17:24:59.456279       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e8e8504b-8b6f-4ef7-808e-297a73c11a8b", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xhksf": the object has been modified; please apply your changes to the latest version and try again
	I0917 17:24:59.488571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.273887ms"
	I0917 17:24:59.517216       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xhksf\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 17:24:59.517627       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e8e8504b-8b6f-4ef7-808e-297a73c11a8b", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xhksf": the object has been modified; please apply your changes to the latest version and try again
	I0917 17:24:59.531517       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.700362ms"
	I0917 17:24:59.531770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.529µs"
	I0917 17:24:59.570271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.057272ms"
	I0917 17:24:59.570630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.874µs"
	
	
	==> kube-controller-manager [e2a0b2a78de1] <==
	I0917 17:23:32.303312       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:23:32.693827       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:23:32.693863       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:23:32.695653       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:23:32.695777       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:23:32.696039       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:23:32.696052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 17:23:52.700852       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [8b4d53aa2a21] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:18:11.351196       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:18:11.358110       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:18:11.358182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:18:11.422753       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:18:11.422782       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:18:11.422800       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:18:11.425522       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:18:11.425930       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:18:11.426022       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:18:11.427003       1 config.go:199] "Starting service config controller"
	I0917 17:18:11.427067       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:18:11.427147       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:18:11.427190       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:18:11.428338       1 config.go:328] "Starting node config controller"
	I0917 17:18:11.428397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:18:11.529109       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:18:11.529170       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:18:11.529199       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fb8b83fe49a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:24:21.123827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:24:21.146583       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:24:21.146876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:24:21.179243       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:24:21.179464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:24:21.179596       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:24:21.183190       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:24:21.184459       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:24:21.184543       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:24:21.188244       1 config.go:199] "Starting service config controller"
	I0917 17:24:21.188350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:24:21.188588       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:24:21.188659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:24:21.192108       1 config.go:328] "Starting node config controller"
	I0917 17:24:21.192216       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:24:21.289888       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:24:21.289903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:24:21.293411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7645ef2ae8d] <==
	W0917 17:23:52.361884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:23:52.361916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.361961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:23:52.361995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:23:52.362165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:23:52.362240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:23:52.362314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:23:52.362490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:23:52.362567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:23:52.362640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:23:52.362799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 17:23:53.372962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b88f9e96fc4a] <==
	E0917 17:20:36.496971       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c9b889c7-d588-4f6b-b31b-3c8f1e40d87a(default/busybox-7dff88458-qcq64) was assumed on ha-744000-m02 but assigned to ha-744000-m03" pod="default/busybox-7dff88458-qcq64"
	E0917 17:20:36.497061       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qcq64\": pod busybox-7dff88458-qcq64 is already assigned to node \"ha-744000-m03\"" pod="default/busybox-7dff88458-qcq64"
	I0917 17:20:36.497317       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qcq64" node="ha-744000-m03"
	I0917 17:20:36.501897       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="1a86846a-5461-4020-b90c-f3dd17823fa1" pod="default/busybox-7dff88458-zg4mr" assumedNode="ha-744000" currentNode="ha-744000-m03"
	E0917 17:20:36.509943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zg4mr\": pod busybox-7dff88458-zg4mr is already assigned to node \"ha-744000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-zg4mr" node="ha-744000-m03"
	E0917 17:20:36.513236       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1a86846a-5461-4020-b90c-f3dd17823fa1(default/busybox-7dff88458-zg4mr) was assumed on ha-744000-m03 but assigned to ha-744000" pod="default/busybox-7dff88458-zg4mr"
	E0917 17:20:36.513426       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zg4mr\": pod busybox-7dff88458-zg4mr is already assigned to node \"ha-744000\"" pod="default/busybox-7dff88458-zg4mr"
	I0917 17:20:36.513506       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-zg4mr" node="ha-744000"
	E0917 17:21:07.475850       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-66bkb\": pod kube-proxy-66bkb is already assigned to node \"ha-744000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-66bkb" node="ha-744000-m04"
	E0917 17:21:07.476183       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wqkz7\": pod kindnet-wqkz7 is already assigned to node \"ha-744000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wqkz7" node="ha-744000-m04"
	E0917 17:21:07.477315       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7821858b-abb3-4eb3-9046-f58a13f48267(kube-system/kube-proxy-66bkb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-66bkb"
	E0917 17:21:07.477361       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-66bkb\": pod kube-proxy-66bkb is already assigned to node \"ha-744000-m04\"" pod="kube-system/kube-proxy-66bkb"
	I0917 17:21:07.477405       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-66bkb" node="ha-744000-m04"
	E0917 17:21:07.481780       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7e9ecf5e-795d-401b-91e5-7b713e07415f(kube-system/kindnet-wqkz7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wqkz7"
	E0917 17:21:07.481854       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wqkz7\": pod kindnet-wqkz7 is already assigned to node \"ha-744000-m04\"" pod="kube-system/kindnet-wqkz7"
	I0917 17:21:07.481873       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wqkz7" node="ha-744000-m04"
	E0917 17:21:07.500320       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-njxt8\": pod kindnet-njxt8 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-njxt8" node="ha-744000-m04"
	E0917 17:21:07.500421       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-njxt8\": pod kindnet-njxt8 is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-njxt8"
	E0917 17:21:07.500768       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s4wh8\": pod kube-proxy-s4wh8 is already assigned to node \"ha-744000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s4wh8" node="ha-744000-m04"
	E0917 17:21:07.501164       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s4wh8\": pod kube-proxy-s4wh8 is already assigned to node \"ha-744000-m04\"" pod="kube-system/kube-proxy-s4wh8"
	I0917 17:21:07.501336       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s4wh8" node="ha-744000-m04"
	I0917 17:22:56.486998       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 17:22:56.488377       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 17:22:56.488567       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 17:22:56.501920       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 17:24:19 ha-744000 kubelet[1571]: I0917 17:24:19.229887    1571 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 17 17:24:19 ha-744000 kubelet[1571]: I0917 17:24:19.332428    1571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-744000" podStartSLOduration=0.332413468 podStartE2EDuration="332.413468ms" podCreationTimestamp="2024-09-17 17:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-17 17:24:19.305657396 +0000 UTC m=+55.452960660" watchObservedRunningTime="2024-09-17 17:24:19.332413468 +0000 UTC m=+55.479716727"
	Sep 17 17:24:19 ha-744000 kubelet[1571]: I0917 17:24:19.874167    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244f5bc456efc23de09015fc6015db115eb68cf29ad04c42a93b23684af2b656"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.027395    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="375cde06a4bcf85c8dbe4a95a59038a5edcc5b669ebdf09711f65a6ec4ccdd5d"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.140048    1571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77d835d88cdbe1f51752629e47476158" path="/var/lib/kubelet/pods/77d835d88cdbe1f51752629e47476158/volumes"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.703044    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1782d63db94f350b5edabaff3845d7885d001cd575956e68ea4ab801acefc5b"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.712344    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b4b5191649e7e23e89a07879b4f0adaac0597f0bf423d115837c82fc418492c"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.803666    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0eee6e67fe42b4371fb56c6ecb297d3e69a4cba74a5270a6664b8feaeae27e3"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.817656    1571 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-744000" podUID="4613d53e-c3b7-48eb-bb87-057beab671e7"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.818111    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3888ce04e78dbb34e516e447734d2814db5be0d6808e1f32db4bbbdf86597bc4"
	Sep 17 17:24:24 ha-744000 kubelet[1571]: I0917 17:24:24.111673    1571 scope.go:117] "RemoveContainer" containerID="c938e7f2f1d48167ceab0b28c2510958f5ed8c527865274d730fb6a34c68d6fc"
	Sep 17 17:24:24 ha-744000 kubelet[1571]: E0917 17:24:24.159210    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:24:24 ha-744000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:24:24 ha-744000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:24:24 ha-744000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:24:24 ha-744000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:24:51 ha-744000 kubelet[1571]: I0917 17:24:51.193772    1571 scope.go:117] "RemoveContainer" containerID="7614d753e30b082bbb245659759587cc678073082201f9c648429b0e86eb7f3d"
	Sep 17 17:24:51 ha-744000 kubelet[1571]: I0917 17:24:51.193990    1571 scope.go:117] "RemoveContainer" containerID="8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2"
	Sep 17 17:24:51 ha-744000 kubelet[1571]: E0917 17:24:51.194071    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9c968c58-13fc-40ef-8098-3b66787272db)\"" pod="kube-system/storage-provisioner" podUID="9c968c58-13fc-40ef-8098-3b66787272db"
	Sep 17 17:25:06 ha-744000 kubelet[1571]: I0917 17:25:06.126176    1571 scope.go:117] "RemoveContainer" containerID="8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2"
	Sep 17 17:25:24 ha-744000 kubelet[1571]: E0917 17:25:24.147461    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:25:24 ha-744000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:25:24 ha-744000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:25:24 ha-744000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:25:24 ha-744000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-744000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (224.88s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 node delete m03 -v=7 --alsologtostderr: (7.061167252s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr: exit status 2 (341.859469ms)

                                                
                                                
-- stdout --
	ha-744000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:26:29.142575    4400 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:29.142766    4400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:29.142771    4400 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:29.142775    4400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:29.142957    4400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:29.143139    4400 out.go:352] Setting JSON to false
	I0917 10:26:29.143162    4400 mustload.go:65] Loading cluster: ha-744000
	I0917 10:26:29.143229    4400 notify.go:220] Checking for updates...
	I0917 10:26:29.143552    4400 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:29.143566    4400 status.go:255] checking status of ha-744000 ...
	I0917 10:26:29.144012    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.144065    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.153392    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52064
	I0917 10:26:29.153759    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.154163    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.154175    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.154385    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.154503    4400 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:29.154594    4400 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:29.154667    4400 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:29.155728    4400 status.go:330] ha-744000 host status = "Running" (err=<nil>)
	I0917 10:26:29.155744    4400 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:26:29.156007    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.156033    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.164448    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52066
	I0917 10:26:29.164808    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.165132    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.165143    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.165366    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.165478    4400 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:26:29.165559    4400 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:26:29.165821    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.165843    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.177381    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52068
	I0917 10:26:29.177717    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.178049    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.178063    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.178297    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.178416    4400 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:29.178555    4400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:26:29.178577    4400 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:26:29.178657    4400 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:26:29.178738    4400 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:29.178819    4400 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:26:29.178898    4400 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:26:29.209362    4400 ssh_runner.go:195] Run: systemctl --version
	I0917 10:26:29.214130    4400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:26:29.225549    4400 kubeconfig.go:125] found "ha-744000" server: "https://192.169.0.254:8443"
	I0917 10:26:29.225570    4400 api_server.go:166] Checking apiserver status ...
	I0917 10:26:29.225642    4400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:26:29.237397    4400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2033/cgroup
	W0917 10:26:29.245222    4400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2033/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:26:29.245270    4400 ssh_runner.go:195] Run: ls
	I0917 10:26:29.248368    4400 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0917 10:26:29.251457    4400 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0917 10:26:29.251468    4400 status.go:422] ha-744000 apiserver status = Running (err=<nil>)
	I0917 10:26:29.251476    4400 status.go:257] ha-744000 status: &{Name:ha-744000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:26:29.251488    4400 status.go:255] checking status of ha-744000-m02 ...
	I0917 10:26:29.251758    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.251780    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.260560    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52072
	I0917 10:26:29.260912    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.261245    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.261257    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.261461    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.261575    4400 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:26:29.261656    4400 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:29.261739    4400 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:26:29.262832    4400 status.go:330] ha-744000-m02 host status = "Running" (err=<nil>)
	I0917 10:26:29.262842    4400 host.go:66] Checking if "ha-744000-m02" exists ...
	I0917 10:26:29.263090    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.263113    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.271669    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52074
	I0917 10:26:29.272010    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.272366    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.272383    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.272592    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.272695    4400 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:26:29.272786    4400 host.go:66] Checking if "ha-744000-m02" exists ...
	I0917 10:26:29.273060    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.273083    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.281718    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0917 10:26:29.282058    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.282391    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.282408    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.282638    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.282750    4400 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:26:29.282890    4400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:26:29.282903    4400 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:26:29.282983    4400 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:26:29.283064    4400 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:26:29.283147    4400 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:26:29.283220    4400 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:26:29.318725    4400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:26:29.330222    4400 kubeconfig.go:125] found "ha-744000" server: "https://192.169.0.254:8443"
	I0917 10:26:29.330236    4400 api_server.go:166] Checking apiserver status ...
	I0917 10:26:29.330277    4400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:26:29.341801    4400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2068/cgroup
	W0917 10:26:29.349822    4400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2068/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:26:29.349872    4400 ssh_runner.go:195] Run: ls
	I0917 10:26:29.353130    4400 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0917 10:26:29.356239    4400 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0917 10:26:29.356249    4400 status.go:422] ha-744000-m02 apiserver status = Running (err=<nil>)
	I0917 10:26:29.356257    4400 status.go:257] ha-744000-m02 status: &{Name:ha-744000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:26:29.356267    4400 status.go:255] checking status of ha-744000-m04 ...
	I0917 10:26:29.356536    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.356556    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.365148    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52080
	I0917 10:26:29.365498    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.365852    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.365866    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.366078    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.366193    4400 main.go:141] libmachine: (ha-744000-m04) Calling .GetState
	I0917 10:26:29.366274    4400 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:29.366358    4400 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 4356
	I0917 10:26:29.367452    4400 status.go:330] ha-744000-m04 host status = "Running" (err=<nil>)
	I0917 10:26:29.367462    4400 host.go:66] Checking if "ha-744000-m04" exists ...
	I0917 10:26:29.367717    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.367760    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.376185    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52082
	I0917 10:26:29.376519    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.376878    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.376893    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.377094    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.377193    4400 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:26:29.377276    4400 host.go:66] Checking if "ha-744000-m04" exists ...
	I0917 10:26:29.377547    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:29.377568    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:29.386313    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52084
	I0917 10:26:29.386671    4400 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:29.387005    4400 main.go:141] libmachine: Using API Version  1
	I0917 10:26:29.387016    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:29.387239    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:29.387360    4400 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:26:29.387511    4400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:26:29.387524    4400 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:26:29.387603    4400 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:26:29.387715    4400 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:26:29.387814    4400 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:26:29.387904    4400 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:26:29.416460    4400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:26:29.427699    4400 status.go:257] ha-744000-m04 status: &{Name:ha-744000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 logs -n 25: (3.050841868s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m04 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp testdata/cp-test.txt                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000 sudo cat                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m03 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-744000 node stop m02 -v=7                                                                                                 | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-744000 node start m02 -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:22 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000 -v=7                                                                                                       | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-744000 -v=7                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT | 17 Sep 24 10:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:23 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	| node    | ha-744000 node delete m03 -v=7                                                                                               | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:23:04
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:23:04.382852    4318 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:23:04.383033    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:23:04.383038    4318 out.go:358] Setting ErrFile to fd 2...
	I0917 10:23:04.383042    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:23:04.383233    4318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:23:04.384637    4318 out.go:352] Setting JSON to false
	I0917 10:23:04.410020    4318 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3151,"bootTime":1726590633,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:23:04.410173    4318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:23:04.431516    4318 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:23:04.474507    4318 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:23:04.474563    4318 notify.go:220] Checking for updates...
	I0917 10:23:04.517356    4318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:04.538348    4318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:23:04.559339    4318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:23:04.580471    4318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:23:04.622325    4318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:23:04.644148    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:04.644323    4318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:23:04.645084    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.645147    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:04.654766    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I0917 10:23:04.655119    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:04.655514    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:04.655526    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:04.655751    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:04.655871    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.684288    4318 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:23:04.726365    4318 start.go:297] selected driver: hyperkit
	I0917 10:23:04.726395    4318 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:04.726649    4318 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:23:04.726838    4318 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:23:04.727063    4318 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:23:04.736820    4318 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:23:04.742830    4318 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.742852    4318 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:23:04.746401    4318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:23:04.746441    4318 cni.go:84] Creating CNI manager for ""
	I0917 10:23:04.746483    4318 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:23:04.746565    4318 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:04.746687    4318 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:23:04.789252    4318 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:23:04.810326    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:04.810440    4318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:23:04.810514    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:04.810708    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:04.810727    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:04.810905    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:04.811872    4318 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:04.811982    4318 start.go:364] duration metric: took 85.186µs to acquireMachinesLock for "ha-744000"
	I0917 10:23:04.812017    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:04.812036    4318 fix.go:54] fixHost starting: 
	I0917 10:23:04.812477    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:04.812504    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:04.821489    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51899
	I0917 10:23:04.821836    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:04.822180    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:04.822195    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:04.822406    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:04.822525    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.822647    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:23:04.822729    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.822838    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 3812
	I0917 10:23:04.823848    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 3812 missing from process table
	I0917 10:23:04.823907    4318 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:23:04.823932    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:23:04.824033    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:04.845116    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:23:04.866254    4318 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:23:04.866533    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.866553    4318 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:23:04.868308    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 3812 missing from process table
	I0917 10:23:04.868320    4318 main.go:141] libmachine: (ha-744000) DBG | pid 3812 is in state "Stopped"
	I0917 10:23:04.868338    4318 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
	I0917 10:23:04.868639    4318 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:23:04.980045    4318 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:23:04.980073    4318 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:04.980180    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfce0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:04.980209    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfce0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:04.980265    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:04.980311    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
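
For readers unfamiliar with hyperkit, the command line logged above decodes as follows. This is an annotated gloss, not part of the log; flag meanings follow bhyve/xhyve conventions, while the slot assignments and paths are taken verbatim from the Arguments line (paths elided here).

	# hyperkit flag breakdown (values from the CmdLine above; paths elided)
	#   -A                  generate ACPI tables for the guest
	#   -u                  guest RTC keeps UTC
	#   -F <hyperkit.pid>   write the hypervisor pid file
	#   -c 2 -m 2200M       2 vCPUs, 2200 MiB of RAM
	#   -s 0:0,hostbridge   PCI host bridge in slot 0
	#   -s 31,lpc           LPC bus backing the com1 serial port
	#   -s 1:0,virtio-net   vmnet-backed NIC (source of the DHCP lease below)
	#   -s 2:0,virtio-blk,<ha-744000.rawdisk>     root disk
	#   -s 3,ahci-cd,<boot2docker.iso>            boot ISO
	#   -s 4,virtio-rnd     entropy device
	#   -l com1,autopty=<tty>,log=<console-ring>  serial console capture
	#   -f kexec,<bzimage>,<initrd>,"<cmdline>"   direct kernel boot, no bootrom
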
	I0917 10:23:04.980327    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:04.981797    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 DEBUG: hyperkit: Pid is 4331
	I0917 10:23:04.982233    4318 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:23:04.982246    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:04.982323    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:23:04.983974    4318 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:23:04.984040    4318 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:04.984071    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:04.984087    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c3c}
	I0917 10:23:04.984115    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0ba8}
	I0917 10:23:04.984133    4318 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0b36}
	I0917 10:23:04.984146    4318 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:23:04.984156    4318 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
	I0917 10:23:04.984188    4318 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:23:04.984817    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:04.984996    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:04.985438    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:04.985457    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:04.985603    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:04.985698    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:04.985789    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:04.985886    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:04.985975    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:04.986095    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:04.986288    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:04.986295    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:04.989700    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:05.044525    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:05.045631    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:05.045647    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:05.045654    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:05.045662    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:05.426657    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:05.426678    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:05.541316    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:05.541359    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:05.541371    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:05.541450    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:05.542317    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:05.542326    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:23:11.152568    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:23:11.152612    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:23:11.152621    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:23:11.176948    4318 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:23:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:23:14.298215    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0917 10:23:17.357957    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:23:17.357984    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.358136    4318 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:23:17.358148    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.358261    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.358357    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.358444    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.358547    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.358661    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.358802    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.358948    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.358957    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:23:17.423407    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:23:17.423427    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.423563    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.423676    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.423778    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.423878    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.424023    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.424163    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.424174    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:23:17.486445    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
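
For context, the script above is idempotent: it leaves /etc/hosts alone when an entry for ha-744000 already exists, and otherwise rewrites the 127.0.1.1 line (or appends one). Assuming the profile is up, the result can be spot-checked from the host with the report's own binary, e.g.:

	# verify the hostname mapping inside the VM (a sketch)
	out/minikube-darwin-amd64 ssh -p ha-744000 -- "hostname; grep 'ha-744000' /etc/hosts"
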
	I0917 10:23:17.486467    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:23:17.486482    4318 buildroot.go:174] setting up certificates
	I0917 10:23:17.486490    4318 provision.go:84] configureAuth start
	I0917 10:23:17.486499    4318 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:23:17.486623    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:17.486725    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.486807    4318 provision.go:143] copyHostCerts
	I0917 10:23:17.486836    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:17.486889    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:23:17.486897    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:17.487028    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:23:17.487256    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:17.487285    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:23:17.487290    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:17.487357    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:23:17.487493    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:17.487527    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:23:17.487531    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:17.487595    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:23:17.487731    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
	I0917 10:23:17.613185    4318 provision.go:177] copyRemoteCerts
	I0917 10:23:17.613267    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:23:17.613292    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.613443    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.613545    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.613632    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.613733    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:17.649429    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:23:17.649501    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:23:17.668769    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:23:17.668834    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:23:17.688500    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:23:17.688567    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:23:17.707535    4318 provision.go:87] duration metric: took 221.030078ms to configureAuth
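
The server certificate generated at 10:23:17.487 was issued for the SANs [127.0.0.1 192.169.0.5 ha-744000 localhost minikube] and has now been copied to /etc/docker alongside the CA cert and key. A minimal sketch for confirming the SANs on the deployed certificate, assuming openssl is available in the guest:

	# print the Subject Alternative Name extension of the Docker server cert
	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'
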
	I0917 10:23:17.707546    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:23:17.707708    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:17.707721    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:17.707852    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.707942    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.708031    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.708110    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.708196    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.708323    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.708452    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.708459    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:23:17.762984    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:23:17.762996    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:23:17.763071    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:23:17.763083    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.763221    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.763321    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.763414    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.763501    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.763654    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.763786    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.763831    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:23:17.831028    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:23:17.831050    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:17.831198    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:17.831285    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.831382    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:17.831474    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:17.831619    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:17.831766    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:17.831778    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:23:19.502053    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
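
Worth noting: the command issued at 10:23:17.831 uses an install-if-changed idiom. diff -u exits non-zero when the rendered unit differs from the installed one (or, as here, when no unit exists yet), and only then does the mv / daemon-reload / enable / restart branch run; an unchanged unit costs nothing. A minimal sketch of the same pattern for a hypothetical unit:

	# install-if-changed: only reload and restart when the unit actually differs
	sudo diff -u /lib/systemd/system/myapp.service /tmp/myapp.service.new \
	  || { sudo mv /tmp/myapp.service.new /lib/systemd/system/myapp.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart myapp.service; }
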
	
	I0917 10:23:19.502067    4318 machine.go:96] duration metric: took 14.516529187s to provisionDockerMachine
	I0917 10:23:19.502080    4318 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:23:19.502098    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:23:19.502109    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.502292    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:23:19.502308    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.502398    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.502495    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.502582    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.502683    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.538092    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:23:19.544386    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:23:19.544403    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:23:19.544498    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:23:19.544649    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:23:19.544655    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:23:19.544826    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:23:19.556994    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:19.591561    4318 start.go:296] duration metric: took 89.471125ms for postStartSetup
	I0917 10:23:19.591589    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.591778    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:23:19.591792    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.591890    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.591986    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.592094    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.592189    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.628129    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:23:19.628204    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:23:19.683042    4318 fix.go:56] duration metric: took 14.870917903s for fixHost
	I0917 10:23:19.683065    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.683198    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.683290    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.683390    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.683480    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.683627    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:19.683766    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:23:19.683773    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:23:19.738877    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593799.774557135
	
	I0917 10:23:19.738891    4318 fix.go:216] guest clock: 1726593799.774557135
	I0917 10:23:19.738896    4318 fix.go:229] Guest: 2024-09-17 10:23:19.774557135 -0700 PDT Remote: 2024-09-17 10:23:19.683055 -0700 PDT m=+15.339523666 (delta=91.502135ms)
	I0917 10:23:19.738917    4318 fix.go:200] guest clock delta is within tolerance: 91.502135ms
	I0917 10:23:19.738921    4318 start.go:83] releasing machines lock for "ha-744000", held for 14.926834615s
	I0917 10:23:19.738935    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739067    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:19.739167    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739471    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739568    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:19.739641    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:23:19.739673    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.739721    4318 ssh_runner.go:195] Run: cat /version.json
	I0917 10:23:19.739736    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:19.739766    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.739840    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.739856    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:19.739947    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.739962    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:19.740048    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.740062    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:19.740142    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:19.774171    4318 ssh_runner.go:195] Run: systemctl --version
	I0917 10:23:19.817235    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:23:19.822623    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:23:19.822678    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:23:19.837890    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:23:19.837904    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:19.838006    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:19.853023    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:23:19.862093    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:23:19.871068    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:23:19.871113    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:23:19.879912    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:19.888688    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:23:19.897529    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:19.906364    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:23:19.915519    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:23:19.924345    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:23:19.933204    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:23:19.942066    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:23:19.950115    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:23:19.958120    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:20.050394    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
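
The sed sequence above forces containerd onto the cgroupfs driver (SystemdCgroup = false) and the runc v2 shim before restarting it. A minimal Go sketch of the central substitution, applied to a local copy of the config file (an illustrative helper, not minikube's actual code; file permissions are simplified):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // Rewrite any `SystemdCgroup = ...` line to force the cgroupfs driver,
    // mirroring the sed call in the log:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func main() {
        const path = "/etc/containerd/config.toml" // assumption: run on the guest
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            log.Fatal(err)
        }
    }
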
	I0917 10:23:20.067714    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:20.067803    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:23:20.081564    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:20.097350    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:23:20.111548    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:20.122410    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:20.132513    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:23:20.154104    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:20.164678    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:20.179449    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:23:20.182399    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:23:20.189403    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:23:20.202719    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:23:20.301120    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:23:20.410774    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:23:20.410853    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:23:20.425592    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:20.533399    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:23:22.845501    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31206782s)
	I0917 10:23:22.845569    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:23:22.857323    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:23:22.872057    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:22.882229    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:23:22.972546    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:23:23.076325    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.190977    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:23:23.204628    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:23.215649    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.315122    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:23:23.379549    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:23:23.379639    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:23:23.384126    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:23:23.384195    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:23:23.387269    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:23:23.412842    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:23:23.412931    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:23.429633    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:23.488622    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:23:23.488658    4318 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:23:23.488993    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:23:23.492752    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:23.502567    4318 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:23:23.502656    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:23.502726    4318 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:23:23.518379    4318 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:23:23.518391    4318 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:23:23.518479    4318 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:23:23.534156    4318 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:23:23.534175    4318 cache_images.go:84] Images are preloaded, skipping loading
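
"Images are preloaded, skipping loading" means every image the release needs already appears in `docker images`, so the preload tarball is never extracted. A sketch of that membership check (image names taken from the list above; not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Compare the images docker reports against the images a release needs,
    // the same shape of check the cache_images step performs.
    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/pause:3.10",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range want {
            if !have[img] {
                fmt.Println("missing, would extract preload:", img)
            }
        }
    }
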
	I0917 10:23:23.534195    4318 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:23:23.534287    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:23:23.534379    4318 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:23:23.569331    4318 cni.go:84] Creating CNI manager for ""
	I0917 10:23:23.569343    4318 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 10:23:23.569361    4318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:23:23.569378    4318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:23:23.569456    4318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
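The kubeadm config above is rendered from the cluster settings listed at kubeadm.go:181. A heavily trimmed Go text/template sketch of how such a document can be produced (the field names and template here are illustrative, not minikube's real template):

    package main

    import (
        "os"
        "text/template"
    )

    // Render a cut-down kubeadm InitConfiguration from values seen in the
    // log; the real template carries many more fields and documents.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.Name}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        t.Execute(os.Stdout, struct {
            NodeIP, CRISocket, Name string
            Port                    int
        }{"192.169.0.5", "/var/run/cri-dockerd.sock", "ha-744000", 8443})
    }
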
	I0917 10:23:23.569470    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:23:23.569527    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:23:23.582869    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:23:23.582932    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
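
The kube-vip config above is a static-pod manifest: it is written to the staticPodPath from the kubelet config earlier (/etc/kubernetes/manifests), where the kubelet runs it directly, without the API server. A sketch of an atomic manifest write (a hypothetical helper; the log actually ships the file over scp):

    package main

    import (
        "os"
        "path/filepath"
    )

    // Write a static-pod manifest to a temp file first, then rename, so
    // the kubelet never observes a half-written file.
    func writeManifest(manifest []byte) error {
        dir := "/etc/kubernetes/manifests"
        tmp, err := os.CreateTemp(dir, "kube-vip-*.yaml")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op after a successful rename
        if _, err := tmp.Write(manifest); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), filepath.Join(dir, "kube-vip.yaml"))
    }

    func main() {
        _ = writeManifest([]byte("# manifest bytes from the generator above\n"))
    }
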
	I0917 10:23:23.582986    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:23:23.591650    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:23:23.591706    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:23:23.600248    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:23:23.613597    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:23:23.626900    4318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:23:23.640890    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:23:23.654403    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:23:23.657129    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
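
The bash one-liner above (also used earlier for host.minikube.internal) makes the /etc/hosts update idempotent: drop any stale line for the name, then append the fresh mapping. The same idea in Go (an illustrative sketch, not the code this log came from; blank lines are dropped for simplicity):

    package main

    import (
        "os"
        "strings"
    )

    // Remove any existing line ending in "\t<name>", then append "<ip>\t<name>".
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
    }

    func main() {
        // assumption: run with enough privilege to write /etc/hosts
        if err := setHostsEntry("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
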
	I0917 10:23:23.666988    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:23.767317    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:23.779290    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:23:23.779301    4318 certs.go:194] generating shared ca certs ...
	I0917 10:23:23.779311    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.779465    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:23:23.779530    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:23:23.779541    4318 certs.go:256] generating profile certs ...
	I0917 10:23:23.779629    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:23:23.779650    4318 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17
	I0917 10:23:23.779666    4318 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 10:23:23.841071    4318 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 ...
	I0917 10:23:23.841087    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17: {Name:mkab82f9fd921972a929c6516cc39a0a941fac49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.841637    4318 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17 ...
	I0917 10:23:23.841647    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17: {Name:mke24af4c0eaf07f776b7fe40f78c9c251937399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:23.841917    4318 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.d41f8f17 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:23:23.842125    4318 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.d41f8f17 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:23:23.842361    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:23:23.842370    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:23:23.842393    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:23:23.842415    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:23:23.842434    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:23:23.842453    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:23:23.842471    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:23:23.842488    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:23:23.842505    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:23:23.842587    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:23:23.842622    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:23:23.842630    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:23:23.842662    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:23:23.842691    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:23:23.842724    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:23:23.842794    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:23.842828    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:23:23.842858    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:23.842876    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:23:23.843373    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:23:23.870080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:23:23.894949    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:23:23.914532    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:23:23.943260    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:23:23.966311    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:23:23.996612    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:23:24.032495    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:23:24.071443    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:23:24.109203    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:23:24.145982    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:23:24.196620    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:23:24.212031    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:23:24.216442    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:23:24.225794    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.229210    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.229255    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:24.233534    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:23:24.242685    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:23:24.251758    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.255864    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.255908    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:23:24.260126    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:23:24.269138    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:23:24.278092    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.281460    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.281501    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:23:24.285770    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
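
Each CA is made discoverable to OpenSSL by symlinking /etc/ssl/certs/<subject-hash>.0 at it; the hash comes from the `openssl x509 -hash -noout` run logged just before each `ln -fs`. A Go sketch of that step, shelling out to openssl exactly as the log does (assumes the openssl binary is on PATH; not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // Ask openssl for the subject hash, then link /etc/ssl/certs/<hash>.0
    // at the certificate, replacing a stale link like `ln -fs` would.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }
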
	I0917 10:23:24.294687    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:23:24.298152    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:23:24.302803    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:23:24.307168    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:23:24.311812    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:23:24.316345    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:23:24.320697    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
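
The `-checkend 86400` runs above ask whether each certificate survives the next 24 hours. The equivalent check in pure Go with crypto/x509 (a sketch; the path is one of those probed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Report whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
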
	I0917 10:23:24.325019    4318 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:23:24.325142    4318 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:23:24.337612    4318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:23:24.345939    4318 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:23:24.345951    4318 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:23:24.345995    4318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:23:24.354304    4318 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:23:24.354625    4318 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.354704    4318 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:23:24.354943    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.355336    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.355573    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:23:24.355889    4318 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:23:24.356070    4318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:23:24.364125    4318 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:23:24.364137    4318 kubeadm.go:597] duration metric: took 18.181933ms to restartPrimaryControlPlane
	I0917 10:23:24.364142    4318 kubeadm.go:394] duration metric: took 39.129847ms to StartCluster
	I0917 10:23:24.364150    4318 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.364222    4318 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:24.364601    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:24.364822    4318 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:23:24.364835    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:23:24.364845    4318 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:23:24.365364    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:24.407801    4318 out.go:177] * Enabled addons: 
	I0917 10:23:24.449987    4318 addons.go:510] duration metric: took 84.961836ms for enable addons: enabled=[]
	I0917 10:23:24.450005    4318 start.go:246] waiting for cluster config update ...
	I0917 10:23:24.450011    4318 start.go:255] writing updated cluster config ...
	I0917 10:23:24.470905    4318 out.go:201] 
	I0917 10:23:24.492266    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:24.492406    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.514885    4318 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:23:24.556844    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:24.556881    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:24.557072    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:24.557091    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:24.557227    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.558233    4318 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:24.558336    4318 start.go:364] duration metric: took 78.234µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:23:24.558362    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:24.558375    4318 fix.go:54] fixHost starting: m02
	I0917 10:23:24.558805    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:24.558841    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:24.567958    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51922
	I0917 10:23:24.568283    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:24.568655    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:24.568674    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:24.568935    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:24.569064    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:24.569164    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:23:24.569268    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.569346    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4278
	I0917 10:23:24.570356    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4278 missing from process table
	I0917 10:23:24.570389    4318 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:23:24.570398    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:23:24.570487    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:23:24.612951    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:23:24.633920    4318 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:23:24.634199    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.634258    4318 main.go:141] libmachine: (ha-744000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:23:24.636176    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4278 missing from process table
	I0917 10:23:24.636188    4318 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4278 is in state "Stopped"
	I0917 10:23:24.636209    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
	I0917 10:23:24.636621    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:23:24.663465    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:23:24.663489    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:24.663621    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:24.663651    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:24.663689    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:24.663725    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:24.663736    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:24.665138    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 DEBUG: hyperkit: Pid is 4339
	I0917 10:23:24.665538    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:23:24.665551    4318 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:24.665623    4318 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:23:24.667294    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:23:24.667331    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:24.667353    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:23:24.667370    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:24.667381    4318 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c3c}
	I0917 10:23:24.667387    4318 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:23:24.667404    4318 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
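
The driver recovers the restarted VM's address by scanning /var/db/dhcpd_leases for the MAC it generated, as the DBG lines show. A simplified Go sketch of that scan (the real lease-file parsing is more careful about block structure):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Pair the ip_address= and hw_address= lines of each lease entry and
    // return the IP whose hardware address ends with the given MAC.
    func leaseIP(mac string) (string, error) {
        f, err := os.Open("/var/db/dhcpd_leases")
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
                return ip, nil
            }
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
        ip, err := leaseIP("72:92:6:7e:7d:92")
        fmt.Println(ip, err)
    }
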
	I0917 10:23:24.667444    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:23:24.668104    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:24.668293    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:24.668710    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:24.668719    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:24.668846    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:24.668942    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:24.669029    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:24.669114    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:24.669205    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:24.669366    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:24.669585    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:24.669596    4318 main.go:141] libmachine: About to run SSH command:
	hostname
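
The "native" SSH client referenced here is Go's golang.org/x/crypto/ssh package. A minimal sketch of running `hostname` the same way, with the user, key path, and address taken from the surrounding log lines (error handling trimmed to panics; host-key checking disabled because this is a throwaway test VM):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
        }
        client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }
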
	I0917 10:23:24.672842    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:24.682575    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:24.683443    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:24.683460    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:24.683476    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:24.683483    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:25.071063    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:25.071079    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:25.186245    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:25.186263    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:25.186274    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:25.186284    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:25.187156    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:25.187168    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:23:30.799209    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:23:30.799230    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:23:30.799236    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:23:30.822685    4318 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:23:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:23:33.867917    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0917 10:23:36.934481    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:23:36.934496    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:36.934638    4318 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:23:36.934649    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:36.934745    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:36.934837    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:36.934932    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:36.935015    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:36.935112    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:36.935288    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:36.935440    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:36.935451    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:23:37.008879    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:23:37.008894    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.009061    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.009159    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.009242    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.009338    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.009486    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.009649    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.009660    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:23:37.078741    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:23:37.078758    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:23:37.078768    4318 buildroot.go:174] setting up certificates
	I0917 10:23:37.078774    4318 provision.go:84] configureAuth start
	I0917 10:23:37.078780    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:23:37.078916    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:37.079043    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.079131    4318 provision.go:143] copyHostCerts
	I0917 10:23:37.079159    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:37.079221    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:23:37.079228    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:23:37.079376    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:23:37.079595    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:37.079637    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:23:37.079642    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:23:37.079718    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:23:37.079893    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:37.079933    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:23:37.079938    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:23:37.080019    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:23:37.080160    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
	I0917 10:23:37.154648    4318 provision.go:177] copyRemoteCerts
	I0917 10:23:37.154702    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:23:37.154717    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.154843    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.154952    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.155045    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.155124    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:37.199228    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:23:37.199298    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:23:37.219018    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:23:37.219098    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:23:37.237862    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:23:37.237936    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:23:37.256979    4318 provision.go:87] duration metric: took 178.197064ms to configureAuth
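
configureAuth (provision.go:84-87 above) generated a server certificate with the SANs listed in the log and pushed ca.pem, server.pem, and server-key.pem into /etc/docker over SSH. A small stdlib sketch, not minikube code, that verifies a server.pem of this shape actually covers those SANs (the local path is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("server.pem")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// SANs taken from the provision.go:117 line above.
    	for _, san := range []string{"127.0.0.1", "192.169.0.6", "ha-744000-m02", "localhost", "minikube"} {
    		fmt.Printf("%-15s covered: %v\n", san, cert.VerifyHostname(san) == nil)
    	}
    }
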
	I0917 10:23:37.256993    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:23:37.257173    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:37.257186    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:37.257323    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.257405    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.257494    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.257572    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.257650    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.257770    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.257893    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.257901    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:23:37.319570    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:23:37.319583    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:23:37.319682    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:23:37.319696    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.319826    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.319938    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.320027    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.320108    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.320250    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.320387    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.320434    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:23:37.391815    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:23:37.391831    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:37.391975    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:37.392081    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.392159    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:37.392252    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:37.392374    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:37.392517    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:37.392529    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:23:39.075500    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:23:39.075515    4318 machine.go:96] duration metric: took 14.406707663s to provisionDockerMachine
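
The diff-or-replace one-liner above only swaps in docker.service.new when it differs from the installed unit; diff also exits non-zero when the target is missing (the "can't stat" case here), so a freshly provisioned machine always installs, enables, and restarts. A Go sketch of the same swap-if-changed idea, with illustrative paths and without the systemctl side effects:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // replaceIfChanged keeps the existing unit when content is identical,
    // otherwise stages <path>.new and moves it into place. A caller would
    // only daemon-reload/restart when this returns true.
    func replaceIfChanged(path string, content []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return false, nil // unchanged: no restart needed
    	}
    	// Missing file (the "can't stat" case in the log) or changed content.
    	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(path+".new", path)
    }

    func main() {
    	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
    	fmt.Println(changed, err)
    }
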
	I0917 10:23:39.075523    4318 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:23:39.075537    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:23:39.075547    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.075750    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:23:39.075764    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.075857    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.075952    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.076033    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.076151    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.119221    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:23:39.122818    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:23:39.122833    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:23:39.122960    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:23:39.123143    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:23:39.123150    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:23:39.123359    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:23:39.133517    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:39.159170    4318 start.go:296] duration metric: took 83.636865ms for postStartSetup
	I0917 10:23:39.159198    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.159385    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:23:39.159399    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.159480    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.159562    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.159664    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.159748    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.198408    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:23:39.198471    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:23:39.229469    4318 fix.go:56] duration metric: took 14.671003724s for fixHost
	I0917 10:23:39.229492    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.229627    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.229719    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.229810    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.229886    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.230020    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:39.230204    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:23:39.230212    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:23:39.293184    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593819.261870922
	
	I0917 10:23:39.293196    4318 fix.go:216] guest clock: 1726593819.261870922
	I0917 10:23:39.293204    4318 fix.go:229] Guest: 2024-09-17 10:23:39.261870922 -0700 PDT Remote: 2024-09-17 10:23:39.229481 -0700 PDT m=+34.885826601 (delta=32.389922ms)
	I0917 10:23:39.293215    4318 fix.go:200] guest clock delta is within tolerance: 32.389922ms
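
The guest clock check reads `date +%s.%N` over SSH and compares it against the host clock; here the delta is 32.389922ms. A worked version of that comparison using the two timestamps from the log (the one-second tolerance is an assumption for this sketch; the actual threshold lives in fix.go):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest: 1726593819.261870922 from `date +%s.%N`.
    	guest := time.Unix(1726593819, 261870922)
    	// Remote (host) time from the fix.go:229 line above.
    	remote := time.Date(2024, 9, 17, 10, 23, 39, 229481000, time.FixedZone("PDT", -7*3600))
    	delta := guest.Sub(remote)
    	fmt.Println(delta, "within tolerance:", delta < time.Second) // ~32.389922ms, true
    }
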
	I0917 10:23:39.293218    4318 start.go:83] releasing machines lock for "ha-744000-m02", held for 14.734778852s
	I0917 10:23:39.293233    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.293362    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:39.314064    4318 out.go:177] * Found network options:
	I0917 10:23:39.336076    4318 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:23:39.357954    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:23:39.357993    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.358861    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.359070    4318 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:23:39.359183    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:23:39.359227    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:23:39.359301    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:23:39.359362    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.359383    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:23:39.359396    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:23:39.359477    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.359514    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:23:39.359570    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.359617    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:23:39.359685    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:23:39.359724    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:23:39.359838    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:23:39.394282    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:23:39.394363    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:23:39.443373    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:23:39.443395    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:39.443489    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:39.459065    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:23:39.468374    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:23:39.477348    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:23:39.477400    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:23:39.486283    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:39.495295    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:23:39.504241    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:23:39.513081    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:23:39.522253    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:23:39.531218    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:23:39.540147    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:23:39.549122    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:23:39.557208    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:23:39.565185    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:39.663216    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
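
Before settling on a cgroup driver, the sed one-liners above normalize /etc/containerd/config.toml: pause image, runc v2 runtime, the CNI conf dir, and SystemdCgroup = false to match the "cgroupfs" driver. A Go sketch of that last rewrite, mirroring the sed expression (not minikube code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroup forces the SystemdCgroup key to match the chosen
    // cgroup driver; "cgroupfs" in the log means SystemdCgroup = false.
    func setSystemdCgroup(toml string, enabled bool) string {
    	re := regexp.MustCompile(`( *)SystemdCgroup = .*`)
    	return re.ReplaceAllString(toml, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
    	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	fmt.Println(setSystemdCgroup(in, false))
    }
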
	I0917 10:23:39.682558    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:23:39.682635    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:23:39.697642    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:39.710638    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:23:39.730208    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:23:39.740809    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:39.751126    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:23:39.776526    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:23:39.786854    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:23:39.801713    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:23:39.804604    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:23:39.811689    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:23:39.825130    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:23:39.919765    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:23:40.027561    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:23:40.027584    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:23:40.041479    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:40.155257    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:23:42.501803    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346511037s)
	I0917 10:23:42.501877    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:23:42.512430    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:23:42.525247    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:42.535597    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:23:42.632719    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:23:42.733072    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:42.848472    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:23:42.862095    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:23:42.873097    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:42.974162    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:23:43.038704    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:23:43.038791    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:23:43.043279    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:23:43.043348    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:23:43.046420    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:23:43.072844    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:23:43.072933    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:43.089215    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:23:43.128559    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:23:43.170903    4318 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 10:23:43.192137    4318 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:23:43.192563    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:23:43.197213    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:43.206867    4318 mustload.go:65] Loading cluster: ha-744000
	I0917 10:23:43.207054    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:43.207326    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:43.207347    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:43.216115    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51945
	I0917 10:23:43.216443    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:43.216788    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:43.216802    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:43.217026    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:43.217137    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:23:43.217215    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:43.217301    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:23:43.218337    4318 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:23:43.218598    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:43.218625    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:43.227260    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51947
	I0917 10:23:43.227601    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:43.227937    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:43.227951    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:43.228147    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:43.228251    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:23:43.228345    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.6
	I0917 10:23:43.228352    4318 certs.go:194] generating shared ca certs ...
	I0917 10:23:43.228362    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:23:43.228527    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:23:43.228599    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:23:43.228607    4318 certs.go:256] generating profile certs ...
	I0917 10:23:43.228718    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:23:43.228804    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.026a9cc7
	I0917 10:23:43.228872    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:23:43.228880    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:23:43.228899    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:23:43.228920    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:23:43.228937    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:23:43.228954    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:23:43.228981    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:23:43.229010    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:23:43.229028    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:23:43.229119    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:23:43.229166    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:23:43.229175    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:23:43.229206    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:23:43.229242    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:23:43.229274    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:23:43.229342    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:23:43.229373    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.229393    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.229410    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.229434    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:23:43.229530    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:23:43.229617    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:23:43.229683    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:23:43.229765    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:23:43.256849    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 10:23:43.260879    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 10:23:43.269481    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 10:23:43.272632    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 10:23:43.280513    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 10:23:43.283582    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 10:23:43.291364    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 10:23:43.294480    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 10:23:43.302789    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 10:23:43.305925    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 10:23:43.313934    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 10:23:43.316968    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 10:23:43.325080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:23:43.345191    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:23:43.364654    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:23:43.384379    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:23:43.404164    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:23:43.424264    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:23:43.444115    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:23:43.463631    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:23:43.483492    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:23:43.502975    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:23:43.522485    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:23:43.543691    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 10:23:43.558295    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 10:23:43.571956    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 10:23:43.585450    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 10:23:43.598936    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 10:23:43.612569    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 10:23:43.626000    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 10:23:43.639468    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:23:43.643552    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:23:43.652183    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.655515    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.655555    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:23:43.659696    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:23:43.668232    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:23:43.676488    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.679940    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.679985    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:23:43.684222    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:23:43.692551    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:23:43.700894    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.704479    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.704526    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:23:43.708650    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:23:43.716969    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:23:43.720371    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:23:43.724736    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:23:43.728968    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:23:43.733213    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:23:43.737400    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:23:43.741597    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
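
Each control-plane certificate is then vetted with `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours), signaling that regeneration is needed. A stdlib equivalent for one of the certs checked above (the path is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresSoon reports whether the certificate at path expires within
    // the given window, matching `openssl x509 -checkend` semantics.
    func expiresSoon(path string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresSoon("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, err)
    }
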
	I0917 10:23:43.745820    4318 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 10:23:43.745877    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
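
The kubelet drop-in above clears the inherited ExecStart, just as the docker unit did earlier, and pins --hostname-override and --node-ip so the joining control-plane node registers as ha-744000-m02 at 192.169.0.6. A sketch of how such a command line could be assembled (the helper name is illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeletExecStart assembles the per-node kubelet invocation seen in
    // the drop-in above.
    func kubeletExecStart(version, hostname, nodeIP string) string {
    	flags := []string{
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    		"--config=/var/lib/kubelet/config.yaml",
    		"--hostname-override=" + hostname,
    		"--kubeconfig=/etc/kubernetes/kubelet.conf",
    		"--node-ip=" + nodeIP,
    	}
    	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
    }

    func main() {
    	fmt.Println(kubeletExecStart("v1.31.1", "ha-744000-m02", "192.169.0.6"))
    }
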
	I0917 10:23:43.745890    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:23:43.745926    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:23:43.758434    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:23:43.758473    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
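
The generated kube-vip static pod runs a leader-elected manager (lease plndr-cp-lock, 5s duration) that announces the control-plane VIP 192.169.0.254 on eth0 port 8443, with load-balancing auto-enabled per the kube-vip.go:167 line above. A quick stdlib check relating that VIP to the other addresses in the log (treating 192.169.0.0/24 as the host-only subnet is an assumption):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	svcCIDR := netip.MustParsePrefix("10.96.0.0/12") // ServiceCIDR from the config above
    	nodeNet := netip.MustParsePrefix("192.169.0.0/24")
    	vip := netip.MustParseAddr("192.169.0.254")
    	// The VIP must live on the node subnet, not inside the service CIDR
    	// that the apiserver allocates ClusterIPs from.
    	fmt.Println("in service CIDR:", svcCIDR.Contains(vip)) // false
    	fmt.Println("on node subnet: ", nodeNet.Contains(vip)) // true
    }
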
	I0917 10:23:43.758527    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:23:43.766283    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:23:43.766331    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 10:23:43.773641    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 10:23:43.786920    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:23:43.800443    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:23:43.813790    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:23:43.816730    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:23:43.826099    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:43.934702    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:43.949825    4318 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:23:43.950025    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:43.971583    4318 out.go:177] * Verifying Kubernetes components...
	I0917 10:23:44.013350    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:23:44.148955    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:23:44.167233    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:23:44.167427    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 10:23:44.167473    4318 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 10:23:44.167643    4318 node_ready.go:35] waiting up to 6m0s for node "ha-744000-m02" to be "Ready" ...
	I0917 10:23:44.167726    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:44.167731    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:44.167739    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:44.167743    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.307737    4318 round_trippers.go:574] Response Status: 200 OK in 8139 milliseconds
	I0917 10:23:52.308306    4318 node_ready.go:49] node "ha-744000-m02" has status "Ready":"True"
	I0917 10:23:52.308317    4318 node_ready.go:38] duration metric: took 8.140607385s for node "ha-744000-m02" to be "Ready" ...
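
node_ready polls GET /api/v1/nodes/ha-744000-m02 until the node's Ready condition flips to True, which here took 8.14s (the first poll blocked 8139ms while the apiserver came back). A minimal sketch of that condition scan, with a plain struct standing in for corev1.NodeCondition:

    package main

    import "fmt"

    type condition struct {
    	Type   string
    	Status string
    }

    // nodeReady mirrors the check behind the `"Ready":"True"` log lines:
    // find the Ready condition and require status True.
    func nodeReady(conds []condition) bool {
    	for _, c := range conds {
    		if c.Type == "Ready" {
    			return c.Status == "True"
    		}
    	}
    	return false
    }

    func main() {
    	conds := []condition{{"MemoryPressure", "False"}, {"DiskPressure", "False"}, {"Ready", "True"}}
    	fmt.Println(nodeReady(conds)) // true -> the wait above ends
    }
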
	I0917 10:23:52.308324    4318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:23:52.308363    4318 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 10:23:52.308373    4318 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 10:23:52.308426    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:52.308431    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.308441    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.308444    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.320722    4318 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0917 10:23:52.327343    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.327408    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j9jcc
	I0917 10:23:52.327415    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.327421    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.327424    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.333529    4318 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 10:23:52.334030    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.334039    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.334045    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.334048    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.338396    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:52.338672    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.338681    4318 pod_ready.go:82] duration metric: took 11.322168ms for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.338688    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.338729    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-khnlh
	I0917 10:23:52.338734    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.338739    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.338744    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.344023    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:23:52.344589    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.344597    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.344602    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.344606    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.349539    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:52.349983    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.349992    4318 pod_ready.go:82] duration metric: took 11.298293ms for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.349999    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.350040    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000
	I0917 10:23:52.350045    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.350051    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.350055    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.357637    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:23:52.358005    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.358013    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.358019    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.358027    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.365136    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:23:52.365716    4318 pod_ready.go:93] pod "etcd-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.365726    4318 pod_ready.go:82] duration metric: took 15.722025ms for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.365733    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.365780    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m02
	I0917 10:23:52.365789    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.365795    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.365799    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.369072    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.369567    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:52.369575    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.369581    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.369584    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.373049    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.373553    4318 pod_ready.go:93] pod "etcd-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.373563    4318 pod_ready.go:82] duration metric: took 7.825215ms for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.373570    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.373616    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:23:52.373621    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.373626    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.373631    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.376282    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.509242    4318 request.go:632] Waited for 132.500318ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:52.509283    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:52.509290    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.509317    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.509323    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.513207    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:52.513696    4318 pod_ready.go:93] pod "etcd-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.513705    4318 pod_ready.go:82] duration metric: took 140.128679ms for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
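
The request.go:632 lines are client-go's client-side rate limiter at work: the rest.Config above leaves QPS and Burst at 0, which client-go (as I understand its defaults; the log does not state this) replaces with 5 QPS and a burst of 10, so tight loops of GETs queue for roughly 130-200ms once the burst is spent. A toy token bucket showing the effect (not client-go code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const qps, burst = 5, 10 // assumed client-go defaults
    	tokens := float64(burst)
    	last := time.Now()
    	for i := 0; i < 12; i++ {
    		now := time.Now()
    		tokens += now.Sub(last).Seconds() * qps // refill at 5 tokens/s
    		if tokens > burst {
    			tokens = burst
    		}
    		last = now
    		if tokens < 1 {
    			// Past the burst: wait ~200ms for the next token, like the
    			// "Waited for ...ms due to client-side throttling" lines.
    			wait := time.Duration((1 - tokens) / qps * float64(time.Second))
    			fmt.Printf("request %d waits %v\n", i, wait)
    			time.Sleep(wait)
    			tokens = 1
    			last = time.Now()
    		}
    		tokens--
    	}
    }
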
	I0917 10:23:52.513724    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.709621    4318 request.go:632] Waited for 195.859717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:23:52.709653    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:23:52.709657    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.709664    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.709669    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.711912    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.908496    4318 request.go:632] Waited for 196.021957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.908552    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:52.908558    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:52.908563    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:52.908566    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:52.911337    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:52.911774    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:52.911783    4318 pod_ready.go:82] duration metric: took 398.052058ms for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:52.911790    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.108964    4318 request.go:632] Waited for 197.132834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:23:53.109014    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:23:53.109019    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.109025    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.109029    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.112077    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:53.308769    4318 request.go:632] Waited for 196.065261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:53.308824    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:53.308830    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.308836    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.308840    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.313525    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:53.313816    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:53.313826    4318 pod_ready.go:82] duration metric: took 402.029202ms for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.313836    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.509951    4318 request.go:632] Waited for 196.074667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:23:53.509985    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:23:53.509990    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.510035    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.510042    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.514822    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:53.709150    4318 request.go:632] Waited for 193.647696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:53.709201    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:53.709210    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.709254    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.709264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.712954    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:53.713373    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:53.713382    4318 pod_ready.go:82] duration metric: took 399.538201ms for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.713389    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:53.908806    4318 request.go:632] Waited for 195.370205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:23:53.908887    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:23:53.908897    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:53.908909    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:53.908917    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:53.911967    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.108997    4318 request.go:632] Waited for 196.429766ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:54.109063    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:54.109070    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.109082    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.109089    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.112475    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.114386    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.114395    4318 pod_ready.go:82] duration metric: took 400.998189ms for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.114402    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.308794    4318 request.go:632] Waited for 194.35354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:23:54.308838    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:23:54.308874    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.308882    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.308915    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.311225    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:54.508611    4318 request.go:632] Waited for 197.017438ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:54.508643    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:54.508648    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.508654    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.508658    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.513358    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:54.514643    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.514653    4318 pod_ready.go:82] duration metric: took 400.244458ms for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.514660    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.709389    4318 request.go:632] Waited for 194.662221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:23:54.709498    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:23:54.709508    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.709517    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.709522    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.712945    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.908904    4318 request.go:632] Waited for 195.122532ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:54.908956    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:54.908964    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:54.908976    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:54.908984    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:54.912489    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:54.912833    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:54.912844    4318 pod_ready.go:82] duration metric: took 398.175427ms for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:54.912853    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.109718    4318 request.go:632] Waited for 196.795087ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:23:55.109851    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:23:55.109863    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.109874    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.109880    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.113014    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:55.310231    4318 request.go:632] Waited for 196.716951ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:23:55.310297    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:23:55.310304    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.310310    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.310327    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.312467    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.312877    4318 pod_ready.go:93] pod "kube-proxy-66bkb" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:55.312887    4318 pod_ready.go:82] duration metric: took 400.026129ms for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.312894    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.508659    4318 request.go:632] Waited for 195.71304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:23:55.508705    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:23:55.508714    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.508762    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.508776    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.511406    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.709478    4318 request.go:632] Waited for 197.620419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:55.709553    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:55.709561    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.709569    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.709573    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.712068    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:55.712400    4318 pod_ready.go:93] pod "kube-proxy-6xd2h" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:55.712409    4318 pod_ready.go:82] duration metric: took 399.507321ms for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.712415    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:55.908839    4318 request.go:632] Waited for 196.378567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:23:55.908879    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:23:55.908886    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:55.908894    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:55.908903    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:55.911317    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.108670    4318 request.go:632] Waited for 196.90743ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:56.108733    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:56.108741    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.108750    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.108755    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.111013    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.111432    4318 pod_ready.go:93] pod "kube-proxy-c5xbc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.111441    4318 pod_ready.go:82] duration metric: took 399.01941ms for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.111448    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.309131    4318 request.go:632] Waited for 197.638325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:23:56.309195    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:23:56.309203    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.309211    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.309218    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.311722    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.510036    4318 request.go:632] Waited for 197.949522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:56.510102    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:56.510108    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.510114    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.510116    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.514224    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:56.514571    4318 pod_ready.go:93] pod "kube-proxy-k9xsp" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.514581    4318 pod_ready.go:82] duration metric: took 403.125717ms for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.514588    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.708850    4318 request.go:632] Waited for 194.175339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:23:56.708991    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:23:56.709003    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.709014    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.709019    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.712753    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:56.909408    4318 request.go:632] Waited for 196.094397ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:56.909453    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:23:56.909458    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:56.909464    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:56.909469    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:56.911617    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:56.911990    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:56.911998    4318 pod_ready.go:82] duration metric: took 397.403001ms for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:56.912004    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.108563    4318 request.go:632] Waited for 196.516714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:23:57.108623    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:23:57.108651    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.108657    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.108661    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.111145    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:57.310537    4318 request.go:632] Waited for 198.433255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:57.310658    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:23:57.310670    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.310681    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.310688    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.313850    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:57.314399    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:57.314411    4318 pod_ready.go:82] duration metric: took 402.398279ms for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.314420    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.508583    4318 request.go:632] Waited for 194.120837ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:23:57.508650    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:23:57.508656    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.508662    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.508667    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.510939    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:57.709335    4318 request.go:632] Waited for 198.006371ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:57.709452    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:23:57.709463    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.709475    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.709482    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.712690    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:23:57.713150    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:23:57.713163    4318 pod_ready.go:82] duration metric: took 398.73468ms for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:23:57.713172    4318 pod_ready.go:39] duration metric: took 5.404804093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
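
Each wait above pairs a GET of the pod with a GET of its node, and a pod counts as ready once its PodReady condition reports True. A condensed sketch of that per-pod check with client-go (the helper name, interval, and clientset wiring are assumptions, not minikube's exact code):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a kube-system pod until its Ready condition is True,
    // mirroring the 6m0s per-pod budget seen in the log above.
    func waitPodReady(cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as "not yet"; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
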
	I0917 10:23:57.713193    4318 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:23:57.713279    4318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:23:57.724647    4318 api_server.go:72] duration metric: took 13.774712051s to wait for apiserver process to appear ...
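
The process wait above shells out to pgrep on the node; exit status 0 means at least one process matched. A sketch of the same idea, run locally with os/exec here (minikube issues the command through its SSH runner):

    package procwait

    import (
        "os/exec"
        "time"
    )

    // apiserverUp reports whether a kube-apiserver process is visible, retrying
    // until the deadline. pgrep flags: -x matches exactly, -n picks the newest
    // process, -f matches against the full command line.
    func apiserverUp(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true
            }
            time.Sleep(time.Second)
        }
        return false
    }
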
	I0917 10:23:57.724659    4318 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:23:57.724675    4318 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 10:23:57.728863    4318 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 10:23:57.728906    4318 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 10:23:57.728911    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.728929    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.728935    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.729498    4318 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 10:23:57.729550    4318 api_server.go:141] control plane version: v1.31.1
	I0917 10:23:57.729558    4318 api_server.go:131] duration metric: took 4.895474ms to wait for apiserver health ...
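
The healthz gate above is a plain HTTPS GET that must come back 200 with body "ok" before the version endpoint is consulted. A stripped-down probe, with certificate verification elided for brevity (the real client trusts the cluster CA):

    package healthz

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probe fetches <apiserver>/healthz and demands a literal "ok" body.
    func probe(apiserver string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: verification is skipped only to keep the sketch short.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(apiserver + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }
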
	I0917 10:23:57.729563    4318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 10:23:57.909401    4318 request.go:632] Waited for 179.781674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:57.909604    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:57.909621    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:57.909636    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:57.909648    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:57.914890    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:23:57.920746    4318 system_pods.go:59] 26 kube-system pods found
	I0917 10:23:57.920767    4318 system_pods.go:61] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running
	I0917 10:23:57.920771    4318 system_pods.go:61] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running
	I0917 10:23:57.920774    4318 system_pods.go:61] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:23:57.920780    4318 system_pods.go:61] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 10:23:57.920785    4318 system_pods.go:61] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:23:57.920789    4318 system_pods.go:61] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:23:57.920791    4318 system_pods.go:61] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:23:57.920796    4318 system_pods.go:61] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 10:23:57.920802    4318 system_pods.go:61] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:23:57.920805    4318 system_pods.go:61] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:23:57.920808    4318 system_pods.go:61] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 10:23:57.920811    4318 system_pods.go:61] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:23:57.920815    4318 system_pods.go:61] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:23:57.920819    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 10:23:57.920824    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:23:57.920827    4318 system_pods.go:61] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:23:57.920829    4318 system_pods.go:61] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:23:57.920832    4318 system_pods.go:61] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:23:57.920836    4318 system_pods.go:61] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 10:23:57.920839    4318 system_pods.go:61] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:23:57.920844    4318 system_pods.go:61] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 10:23:57.920848    4318 system_pods.go:61] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:23:57.920851    4318 system_pods.go:61] "kube-vip-ha-744000" [4613d53e-c3b7-48eb-bb87-057beab671e7] Running
	I0917 10:23:57.920858    4318 system_pods.go:61] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:23:57.920862    4318 system_pods.go:61] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:23:57.920864    4318 system_pods.go:61] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:23:57.920868    4318 system_pods.go:74] duration metric: took 191.300068ms to wait for pod list to return data ...
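
The inventory above is one List call against the kube-system namespace; the Running / ContainersNotReady annotations are derived from each pod's phase and conditions. A minimal sketch, reusing the assumed clientset from the earlier sketches:

    package inventory

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its phase and readiness,
    // roughly the shape of the "26 kube-system pods found" block above.
    func listSystemPods(cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%q %s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
        return nil
    }
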
	I0917 10:23:57.920876    4318 default_sa.go:34] waiting for default service account to be created ...
	I0917 10:23:58.108816    4318 request.go:632] Waited for 187.888047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:23:58.108877    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:23:58.108885    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.108893    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.108898    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.111818    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:23:58.111952    4318 default_sa.go:45] found service account: "default"
	I0917 10:23:58.111961    4318 default_sa.go:55] duration metric: took 191.079569ms for default service account to be created ...
	I0917 10:23:58.111967    4318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 10:23:58.309003    4318 request.go:632] Waited for 196.929892ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:58.309102    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:23:58.309111    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.309136    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.309143    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.314149    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:58.319524    4318 system_pods.go:86] 26 kube-system pods found
	I0917 10:23:58.319535    4318 system_pods.go:89] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running
	I0917 10:23:58.319541    4318 system_pods.go:89] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running
	I0917 10:23:58.319544    4318 system_pods.go:89] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:23:58.319549    4318 system_pods.go:89] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 10:23:58.319554    4318 system_pods.go:89] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:23:58.319557    4318 system_pods.go:89] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:23:58.319567    4318 system_pods.go:89] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:23:58.319571    4318 system_pods.go:89] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 10:23:58.319580    4318 system_pods.go:89] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:23:58.319584    4318 system_pods.go:89] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:23:58.319588    4318 system_pods.go:89] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 10:23:58.319591    4318 system_pods.go:89] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:23:58.319595    4318 system_pods.go:89] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:23:58.319599    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 10:23:58.319602    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:23:58.319612    4318 system_pods.go:89] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:23:58.319616    4318 system_pods.go:89] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:23:58.319618    4318 system_pods.go:89] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:23:58.319622    4318 system_pods.go:89] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 10:23:58.319628    4318 system_pods.go:89] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:23:58.319632    4318 system_pods.go:89] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 10:23:58.319635    4318 system_pods.go:89] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:23:58.319639    4318 system_pods.go:89] "kube-vip-ha-744000" [4613d53e-c3b7-48eb-bb87-057beab671e7] Running
	I0917 10:23:58.319642    4318 system_pods.go:89] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:23:58.319644    4318 system_pods.go:89] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:23:58.319647    4318 system_pods.go:89] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:23:58.319651    4318 system_pods.go:126] duration metric: took 207.678997ms to wait for k8s-apps to be running ...
	I0917 10:23:58.319662    4318 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 10:23:58.319720    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:23:58.331325    4318 system_svc.go:56] duration metric: took 11.65971ms WaitForService to wait for kubelet
	I0917 10:23:58.331338    4318 kubeadm.go:582] duration metric: took 14.381399967s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
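
The kubelet gate relies entirely on systemctl's exit code: `is-active --quiet` prints nothing and exits 0 only when the unit is active. A one-function sketch, with local exec standing in for the SSH runner:

    package svc

    import "os/exec"

    // kubeletActive mirrors the check logged above; --quiet suppresses output,
    // so the exit status alone carries the answer.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }
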
	I0917 10:23:58.331366    4318 node_conditions.go:102] verifying NodePressure condition ...
	I0917 10:23:58.509807    4318 request.go:632] Waited for 178.384911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 10:23:58.509886    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 10:23:58.509895    4318 round_trippers.go:469] Request Headers:
	I0917 10:23:58.509908    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:23:58.509913    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:23:58.514102    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:23:58.514949    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514961    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514970    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514973    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514976    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514979    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.514982    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:23:58.514995    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:23:58.515002    4318 node_conditions.go:105] duration metric: took 183.62967ms to run NodePressure ...
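
NodePressure verification reads every node's capacity from a single /api/v1/nodes list; the four identical storage/cpu pairs above are the four cluster nodes. A sketch of that read, again assuming a configured clientset:

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printCapacities lists each node's ephemeral-storage and CPU capacity,
    // matching the node_conditions lines above.
    func printCapacities(cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
        }
        return nil
    }
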
	I0917 10:23:58.515010    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:23:58.515030    4318 start.go:255] writing updated cluster config ...
	I0917 10:23:58.535539    4318 out.go:201] 
	I0917 10:23:58.573360    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:23:58.573455    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.595258    4318 out.go:177] * Starting "ha-744000-m03" control-plane node in "ha-744000" cluster
	I0917 10:23:58.653092    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:23:58.653125    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:23:58.653337    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:23:58.653370    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:23:58.653501    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.654346    4318 start.go:360] acquireMachinesLock for ha-744000-m03: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:23:58.654469    4318 start.go:364] duration metric: took 97.666µs to acquireMachinesLock for "ha-744000-m03"
	I0917 10:23:58.654496    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:23:58.654503    4318 fix.go:54] fixHost starting: m03
	I0917 10:23:58.655039    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:23:58.655076    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:23:58.665444    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0917 10:23:58.665867    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:23:58.666300    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:23:58.666321    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:23:58.666529    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:23:58.666645    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:23:58.666734    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetState
	I0917 10:23:58.666815    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.666929    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 3837
	I0917 10:23:58.667977    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid 3837 missing from process table
	I0917 10:23:58.668019    4318 fix.go:112] recreateIfNeeded on ha-744000-m03: state=Stopped err=<nil>
	I0917 10:23:58.668029    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	W0917 10:23:58.668111    4318 fix.go:138] unexpected machine state, will restart: <nil>
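
fixHost decides between reusing and restarting the machine by checking whether the pid recorded in hyperkit.pid is still alive; "pid 3837 missing from process table" above means the probe failed, so the state is treated as Stopped and the stale pidfile is removed. A sketch of that liveness probe (Unix-only; the helper name is an assumption):

    package pidcheck

    import (
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    // alive reads a pidfile and signals the pid with 0, which performs error
    // checking only: an error (e.g. ESRCH) means the process is gone.
    func alive(pidfile string) bool {
        b, err := os.ReadFile(pidfile)
        if err != nil {
            return false
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
        if err != nil {
            return false
        }
        return syscall.Kill(pid, 0) == nil
    }
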
	I0917 10:23:58.707286    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m03" ...
	I0917 10:23:58.781042    4318 main.go:141] libmachine: (ha-744000-m03) Calling .Start
	I0917 10:23:58.781398    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.781451    4318 main.go:141] libmachine: (ha-744000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid
	I0917 10:23:58.783354    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid 3837 missing from process table
	I0917 10:23:58.783371    4318 main.go:141] libmachine: (ha-744000-m03) DBG | pid 3837 is in state "Stopped"
	I0917 10:23:58.783401    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid...
	I0917 10:23:58.783560    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Using UUID 2629e9cb-d7e0-4a36-a6bd-c4320ca3711f
	I0917 10:23:58.808610    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Generated MAC 5a:8d:be:33:c3:18
	I0917 10:23:58.808632    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:23:58.808748    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004040c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:58.808788    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004040c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:23:58.808853    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2629e9cb-d7e0-4a36-a6bd-c4320ca3711f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/ha-744000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:23:58.808899    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2629e9cb-d7e0-4a36-a6bd-c4320ca3711f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/ha-744000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:23:58.808915    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:23:58.810278    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 DEBUG: hyperkit: Pid is 4346
	I0917 10:23:58.810623    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Attempt 0
	I0917 10:23:58.810633    4318 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:23:58.810707    4318 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 4346
	I0917 10:23:58.812422    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Searching for 5a:8d:be:33:c3:18 in /var/db/dhcpd_leases ...
	I0917 10:23:58.812491    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:23:58.812547    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:23:58.812578    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:23:58.812610    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:23:58.812627    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetConfigRaw
	I0917 10:23:58.812629    4318 main.go:141] libmachine: (ha-744000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0ba8}
	I0917 10:23:58.812645    4318 main.go:141] libmachine: (ha-744000-m03) DBG | Found match: 5a:8d:be:33:c3:18
	I0917 10:23:58.812659    4318 main.go:141] libmachine: (ha-744000-m03) DBG | IP: 192.169.0.7
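
With no guest agent available, the driver learns the VM's address by matching the generated MAC against macOS's vmnet lease database, as the "Searching for 5a:8d:be:33:c3:18" lines show. A simplified scanner sketch (the field names are my reading of the standard /var/db/dhcpd_leases record format; the real parser handles full brace-delimited records and lease expiry):

    package leases

    import (
        "bufio"
        "os"
        "strings"
    )

    // ipForMAC scans /var/db/dhcpd_leases for a MAC and returns the paired IP.
    // Records carry lines like "ip_address=192.169.0.7" and
    // "hw_address=1,5a:8d:be:33:c3:18"; note macOS drops leading zeros in MACs.
    func ipForMAC(mac string) (string, bool) {
        f, err := os.Open("/var/db/dhcpd_leases")
        if err != nil {
            return "", false
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if v, ok := strings.CutPrefix(line, "ip_address="); ok {
                ip = v
            }
            if v, ok := strings.CutPrefix(line, "hw_address="); ok && strings.HasSuffix(v, mac) {
                return ip, true
            }
        }
        return "", false
    }
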
	I0917 10:23:58.813322    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:23:58.813511    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:23:58.814083    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:23:58.814095    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:23:58.814255    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:23:58.814354    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:23:58.814443    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:23:58.814551    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:23:58.814660    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:23:58.814840    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:23:58.815013    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:23:58.815022    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:23:58.818431    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:23:58.826878    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:23:58.827963    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:58.827996    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:58.828016    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:58.828056    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:59.216264    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:23:59.216286    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:23:59.331075    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:23:59.331093    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:23:59.331106    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:23:59.331113    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:23:59.331943    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:23:59.331953    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:23:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:24:04.953344    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:24:04.953400    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:24:04.953409    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:24:04.976712    4318 main.go:141] libmachine: (ha-744000-m03) DBG | 2024/09/17 10:24:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:24:08.843565    4318 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.7:22: connect: connection refused
	I0917 10:24:11.901419    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:24:11.901434    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:11.901561    4318 buildroot.go:166] provisioning hostname "ha-744000-m03"
	I0917 10:24:11.901572    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:11.901663    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:11.901749    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:11.901841    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.901928    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.902023    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:11.902156    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:11.902302    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:11.902310    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m03 && echo "ha-744000-m03" | sudo tee /etc/hostname
	I0917 10:24:11.969021    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m03
	
	I0917 10:24:11.969036    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:11.969172    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:11.969284    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.969390    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:11.969484    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:11.969628    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:11.969778    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:11.969789    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:24:12.032993    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
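Note on the block above: the provisioner first sets the hostname over SSH, then runs the guarded script to keep /etc/hosts consistent with it. The outer grep makes the edit idempotent (a second provision is a no-op), and the inner branch either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. A minimal guest-side check of the result (NODE is just a stand-in for the machine name from the log):

    NODE=ha-744000-m03
    hostname                         # expected: ${NODE}
    grep -n '^127.0.1.1' /etc/hosts  # expected: 127.0.1.1 ${NODE}
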
	I0917 10:24:12.033009    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:24:12.033021    4318 buildroot.go:174] setting up certificates
	I0917 10:24:12.033027    4318 provision.go:84] configureAuth start
	I0917 10:24:12.033034    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetMachineName
	I0917 10:24:12.033164    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:12.033268    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.033363    4318 provision.go:143] copyHostCerts
	I0917 10:24:12.033396    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:24:12.033443    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:24:12.033450    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:24:12.033597    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:24:12.033799    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:24:12.033838    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:24:12.033843    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:24:12.033926    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:24:12.034067    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:24:12.034095    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:24:12.034100    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:24:12.034194    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:24:12.034361    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m03 san=[127.0.0.1 192.169.0.7 ha-744000-m03 localhost minikube]
	I0917 10:24:12.149328    4318 provision.go:177] copyRemoteCerts
	I0917 10:24:12.149388    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:24:12.149403    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.149590    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.149685    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.149761    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.149846    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:12.184712    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:24:12.184807    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:24:12.204199    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:24:12.204267    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:24:12.223758    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:24:12.223831    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:24:12.243169    4318 provision.go:87] duration metric: took 210.132957ms to configureAuth
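configureAuth, timed above at ~210ms, refreshes the host-side CA/client material under .minikube and mints a server certificate whose SAN list is shown at 10:24:12.034361 (127.0.0.1, 192.169.0.7, ha-744000-m03, localhost, minikube) before scp-ing it to /etc/docker on the guest. A sketch for inspecting that SAN list with stock openssl, using the path from the log:

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
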
	I0917 10:24:12.243183    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:24:12.243371    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:12.243385    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:12.243518    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.243598    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.243687    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.243761    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.243855    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.243970    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.244103    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.244110    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:24:12.301530    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:24:12.301541    4318 buildroot.go:70] root file system type: tmpfs
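The `df --output=fstype / | tail -n 1` probe above detects that the guest root filesystem is tmpfs, i.e. the buildroot live image, where writes outside persisted paths do not survive a reboot; that is presumably why the docker unit below is (re)written on every provision rather than assumed present. The probe itself is plain coreutils and can be run by hand:

    df --output=fstype / | tail -n 1   # prints "tmpfs" on this guest
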
	I0917 10:24:12.301620    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:24:12.301632    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.301763    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.301869    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.301966    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.302040    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.302167    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.302303    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.302348    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:24:12.370095    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:24:12.370113    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:12.370241    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:12.370333    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.370424    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:12.370523    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:12.370657    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:12.370794    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:12.370805    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:24:14.004628    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:24:14.004644    4318 machine.go:96] duration metric: took 15.190455794s to provisionDockerMachine
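Two idioms land in the provisioning step just completed. First, the unit text uses the standard empty `ExecStart=` line to clear any inherited start command before defining the real one; without the reset, systemd would refuse the unit, since only Type=oneshot services may carry multiple ExecStart settings (as the unit's own comment notes). Second, the `diff -u old new || { mv ...; }` wrapper is an install-on-change guard: diff exits non-zero both when the two files differ and, as in the `can't stat` output above, when the installed unit does not exist yet, so the move/daemon-reload/enable/restart branch runs exactly when the unit changed and is skipped on a no-op re-provision. Both restated as guest-side sketches, same paths as the log:

    # verify the override resolved to a single effective command
    systemctl cat docker.service
    systemctl show -p ExecStart docker.service

    # install-on-change guard, unrolled
    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
        sudo mv "$new" "$cur"
        sudo systemctl -f daemon-reload \
          && sudo systemctl -f enable docker \
          && sudo systemctl -f restart docker
    }
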
	I0917 10:24:14.004650    4318 start.go:293] postStartSetup for "ha-744000-m03" (driver="hyperkit")
	I0917 10:24:14.004657    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:24:14.004672    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.004878    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:24:14.004901    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.005017    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.005138    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.005237    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.005322    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.044460    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:24:14.048554    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:24:14.048568    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:24:14.048680    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:24:14.048820    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:24:14.048826    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:24:14.048988    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:24:14.057354    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:24:14.088743    4318 start.go:296] duration metric: took 84.082897ms for postStartSetup
	I0917 10:24:14.088765    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.088958    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:24:14.088972    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.089062    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.089149    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.089239    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.089326    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.124314    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:24:14.124387    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:24:14.177086    4318 fix.go:56] duration metric: took 15.522482042s for fixHost
	I0917 10:24:14.177117    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.177268    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.177375    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.177470    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.177560    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.177699    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:14.177847    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 10:24:14.177855    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:24:14.235217    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593854.127008624
	
	I0917 10:24:14.235235    4318 fix.go:216] guest clock: 1726593854.127008624
	I0917 10:24:14.235240    4318 fix.go:229] Guest: 2024-09-17 10:24:14.127008624 -0700 PDT Remote: 2024-09-17 10:24:14.177103 -0700 PDT m=+69.833227660 (delta=-50.094376ms)
	I0917 10:24:14.235251    4318 fix.go:200] guest clock delta is within tolerance: -50.094376ms
	I0917 10:24:14.235255    4318 start.go:83] releasing machines lock for "ha-744000-m03", held for 15.580676894s
	I0917 10:24:14.235272    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.235402    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:14.257745    4318 out.go:177] * Found network options:
	I0917 10:24:14.279018    4318 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 10:24:14.300830    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:24:14.300855    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:24:14.300870    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301356    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301486    4318 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:24:14.301594    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:24:14.301623    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	W0917 10:24:14.301663    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:24:14.301685    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:24:14.301770    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:24:14.301785    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:24:14.301824    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.301934    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:24:14.301945    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.302070    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.302137    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:24:14.302238    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:24:14.302321    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:24:14.302438    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	W0917 10:24:14.334246    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:24:14.334313    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:24:14.380907    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:24:14.380924    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:24:14.381008    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:24:14.397032    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:24:14.406169    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:24:14.415306    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:24:14.415369    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:24:14.424550    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:24:14.435946    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:24:14.448076    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:24:14.457027    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:24:14.466527    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:24:14.475918    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:24:14.484801    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:24:14.494039    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:24:14.502344    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:24:14.510724    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:14.608373    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:24:14.627463    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:24:14.627552    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:24:14.644673    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:24:14.657243    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:24:14.675019    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:24:14.686098    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:24:14.697382    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:24:14.722583    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:24:14.734058    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:24:14.749179    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:24:14.752033    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:24:14.760199    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:24:14.773743    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:24:14.866897    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:24:14.972459    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:24:14.972482    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:24:14.986205    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:15.081962    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:24:17.363023    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.281026419s)
	I0917 10:24:17.363099    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:24:17.373222    4318 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:24:17.386396    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:24:17.397093    4318 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:24:17.488832    4318 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:24:17.603916    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:17.712002    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:24:17.725875    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:24:17.737346    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:17.846138    4318 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:24:17.910308    4318 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:24:17.910400    4318 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:24:17.914917    4318 start.go:563] Will wait 60s for crictl version
	I0917 10:24:17.914984    4318 ssh_runner.go:195] Run: which crictl
	I0917 10:24:17.918153    4318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:24:17.947145    4318 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:24:17.947245    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:24:17.963719    4318 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:24:18.000615    4318 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:24:18.042227    4318 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 10:24:18.063289    4318 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 10:24:18.084167    4318 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:24:18.084404    4318 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:24:18.087640    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
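The /etc/hosts edit above deliberately avoids in-place editing: it filters out any stale host.minikube.internal line, appends the fresh mapping, writes the result to a PID-suffixed temp file, and cp's it back, replacing the contents while keeping the original inode (which matters in environments where /etc/hosts is bind-mounted, and is harmless here). The same pattern, unrolled as a sketch:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.169.0.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
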
	I0917 10:24:18.098050    4318 mustload.go:65] Loading cluster: ha-744000
	I0917 10:24:18.098230    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:18.098462    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:18.098484    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:18.107325    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51975
	I0917 10:24:18.107666    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:18.108009    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:18.108026    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:18.108255    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:18.108371    4318 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:24:18.108467    4318 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:18.108528    4318 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:24:18.109600    4318 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:24:18.109898    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:18.109929    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:18.118725    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51977
	I0917 10:24:18.119073    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:18.119409    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:18.119421    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:18.119635    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:18.119739    4318 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:24:18.119820    4318 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.7
	I0917 10:24:18.119829    4318 certs.go:194] generating shared ca certs ...
	I0917 10:24:18.119841    4318 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:24:18.119995    4318 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:24:18.120047    4318 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:24:18.120060    4318 certs.go:256] generating profile certs ...
	I0917 10:24:18.120159    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:24:18.120243    4318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.2fbb59ab
	I0917 10:24:18.120301    4318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:24:18.120308    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:24:18.120350    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:24:18.120376    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:24:18.120395    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:24:18.120412    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:24:18.120438    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:24:18.120458    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:24:18.120476    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:24:18.120563    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:24:18.120603    4318 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:24:18.120612    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:24:18.120645    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:24:18.120678    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:24:18.120708    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:24:18.120780    4318 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:24:18.120814    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.120834    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.120851    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.120877    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:24:18.120957    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:24:18.121043    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:24:18.121130    4318 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:24:18.121202    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:24:18.147236    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 10:24:18.150493    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 10:24:18.158955    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 10:24:18.162129    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 10:24:18.169902    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 10:24:18.173023    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 10:24:18.181042    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 10:24:18.184431    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 10:24:18.192679    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 10:24:18.195793    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 10:24:18.203953    4318 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 10:24:18.207044    4318 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 10:24:18.215067    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:24:18.235596    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:24:18.255384    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:24:18.274936    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:24:18.294598    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 10:24:18.314207    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:24:18.333653    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:24:18.352964    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:24:18.372887    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:24:18.392444    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:24:18.412080    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:24:18.431948    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 10:24:18.445500    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 10:24:18.459362    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 10:24:18.473399    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 10:24:18.487272    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 10:24:18.501703    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 10:24:18.515561    4318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 10:24:18.529533    4318 ssh_runner.go:195] Run: openssl version
	I0917 10:24:18.533858    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:24:18.543223    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.546597    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.546657    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:24:18.550937    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:24:18.560220    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:24:18.569425    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.572837    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.572891    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:24:18.577272    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:24:18.586607    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:24:18.596344    4318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.600052    4318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.600113    4318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:24:18.604520    4318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:24:18.614023    4318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:24:18.617509    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:24:18.621851    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:24:18.626160    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:24:18.630354    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:24:18.634589    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:24:18.638973    4318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
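The run of openssl calls above is a certificate-freshness gate: `-checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so a non-zero exit here would flag a credential as too stale to join the node with. The same sweep as a loop over the cert names from the log (sketch):

    for c in apiserver-etcd-client apiserver-kubelet-client \
             etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "expiring within 24h: ${c}"
    done
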
	I0917 10:24:18.643298    4318 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 10:24:18.643362    4318 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
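The kubelet unit above reuses the empty-ExecStart reset idiom and pins `--hostname-override=ha-744000-m03` and `--node-ip=192.169.0.7`, so this joining control-plane node registers under its own name and address rather than whatever the guest happens to report. After the `systemctl start kubelet` further down, the effective flags can be confirmed with (sketch):

    systemctl cat kubelet | grep '^ExecStart=/'   # the one non-empty ExecStart
    pgrep -a kubelet                              # live command line incl. --node-ip
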
	I0917 10:24:18.643382    4318 kube-vip.go:115] generating kube-vip config ...
	I0917 10:24:18.643427    4318 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:24:18.656418    4318 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:24:18.656455    4318 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
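The YAML above is a static-pod manifest: it is scp'd to /etc/kubernetes/manifests below, and kubelet runs it without the API server's involvement. Per its env block, kube-vip then competes for the plndr-cp-lock lease, advertises the VIP 192.169.0.254/32 via ARP on eth0, and load-balances control-plane traffic on port 8443 (lb_enable/lb_port). A quick liveness sketch once a leader holds the lease:

    ip addr show eth0 | grep 192.169.0.254       # present on the current leader
    curl -sk https://192.169.0.254:8443/version  # API reachable through the VIP
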
	I0917 10:24:18.656516    4318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:24:18.665097    4318 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:24:18.665163    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 10:24:18.673393    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 10:24:18.687079    4318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:24:18.701092    4318 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:24:18.714815    4318 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:24:18.717763    4318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:24:18.727902    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:18.829461    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:24:18.842084    4318 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:24:18.842275    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:18.863032    4318 out.go:177] * Verifying Kubernetes components...
	I0917 10:24:18.883865    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:24:18.998710    4318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:24:19.010018    4318 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:24:19.010220    4318 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11f2e720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 10:24:19.010257    4318 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 10:24:19.010447    4318 node_ready.go:35] waiting up to 6m0s for node "ha-744000-m03" to be "Ready" ...
	I0917 10:24:19.010490    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.010495    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.010502    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.010506    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.012607    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.012878    4318 node_ready.go:49] node "ha-744000-m03" has status "Ready":"True"
	I0917 10:24:19.012890    4318 node_ready.go:38] duration metric: took 2.431907ms for node "ha-744000-m03" to be "Ready" ...
	I0917 10:24:19.012896    4318 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:24:19.012942    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:19.012948    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.012953    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.012957    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.016637    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:19.021780    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.021832    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j9jcc
	I0917 10:24:19.021838    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.021845    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.021849    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.023987    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.024523    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.024531    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.024537    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.024540    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.026255    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.026592    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.026602    4318 pod_ready.go:82] duration metric: took 4.810235ms for pod "coredns-7c65d6cfc9-j9jcc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.026609    4318 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.026651    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-khnlh
	I0917 10:24:19.026656    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.026661    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.026665    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.028592    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.029028    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.029035    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.029041    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.029046    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.031043    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.031318    4318 pod_ready.go:93] pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.031326    4318 pod_ready.go:82] duration metric: took 4.71115ms for pod "coredns-7c65d6cfc9-khnlh" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.031340    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.031385    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000
	I0917 10:24:19.031390    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.031395    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.031400    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.033205    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.033583    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.033590    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.033596    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.033600    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.035534    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.035980    4318 pod_ready.go:93] pod "etcd-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.035990    4318 pod_ready.go:82] duration metric: took 4.645198ms for pod "etcd-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.035996    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.036034    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m02
	I0917 10:24:19.036039    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.036044    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.036047    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.038093    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:19.038513    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:19.038520    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.038526    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.038529    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.040485    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:19.041086    4318 pod_ready.go:93] pod "etcd-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.041096    4318 pod_ready.go:82] duration metric: took 5.095487ms for pod "etcd-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.041103    4318 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.210917    4318 request.go:632] Waited for 169.774559ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:24:19.210994    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-744000-m03
	I0917 10:24:19.211005    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.211012    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.211017    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.219188    4318 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 10:24:19.410612    4318 request.go:632] Waited for 190.84697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.410658    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:19.410668    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.410679    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.410688    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.427654    4318 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 10:24:19.428047    4318 pod_ready.go:93] pod "etcd-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.428057    4318 pod_ready.go:82] duration metric: took 386.946972ms for pod "etcd-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
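
// The "Waited for ... due to client-side throttling, not priority and
// fairness" lines above are emitted by client-go's own token-bucket rate
// limiter, not by the API server. A sketch of where that limiter is
// configured; 5 QPS with a burst of 10 are client-go's documented defaults,
// stated here as an assumption about this run rather than read from the log.
package sketch

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

func withDefaultThrottling(cfg *rest.Config) *rest.Config {
	cfg.QPS = 5    // steady-state requests per second
	cfg.Burst = 10 // extra headroom before requests start waiting
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
	return cfg
}
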
	I0917 10:24:19.428069    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.611188    4318 request.go:632] Waited for 183.076824ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:24:19.611240    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000
	I0917 10:24:19.611249    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.611257    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.611264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.622189    4318 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0917 10:24:19.811318    4318 request.go:632] Waited for 187.797206ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.811366    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:19.811407    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:19.811419    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:19.811426    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:19.823164    4318 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 10:24:19.823509    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:19.823520    4318 pod_ready.go:82] duration metric: took 395.442485ms for pod "kube-apiserver-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:19.823528    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.010832    4318 request.go:632] Waited for 187.259959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:24:20.010872    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m02
	I0917 10:24:20.010876    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.010913    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.010919    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.016809    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:20.210576    4318 request.go:632] Waited for 193.290597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:20.210656    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:20.210663    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.210675    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.210681    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.241143    4318 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0917 10:24:20.242017    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:20.242029    4318 pod_ready.go:82] duration metric: took 418.492753ms for pod "kube-apiserver-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.242037    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:20.412058    4318 request.go:632] Waited for 169.980212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.412108    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.412115    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.412119    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.412124    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.426145    4318 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 10:24:20.611816    4318 request.go:632] Waited for 184.70602ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:20.611860    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:20.611919    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.611928    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.611934    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.620369    4318 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 10:24:20.811031    4318 request.go:632] Waited for 68.064136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.811067    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:20.811073    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:20.811120    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:20.811130    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:20.814429    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:21.010914    4318 request.go:632] Waited for 195.866244ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.010969    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.010976    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.010982    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.010986    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.013773    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.243275    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:21.243312    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.243339    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.243347    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.246247    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.411834    4318 request.go:632] Waited for 165.11515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.411870    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.411880    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.411906    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.411911    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.414456    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:21.742665    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:21.742680    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.742687    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.742691    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.745790    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:21.812507    4318 request.go:632] Waited for 66.156229ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.812582    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:21.812590    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:21.812600    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:21.812608    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:21.820287    4318 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 10:24:22.242306    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:22.242320    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.242327    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.242331    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.244398    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.244874    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:22.244882    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.244888    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.244892    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.246990    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.247323    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:22.742294    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:22.742306    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.742313    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.742316    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.744814    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:22.745729    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:22.745740    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:22.745748    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:22.745751    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:22.748226    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.242342    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:23.242353    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.242359    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.242363    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.244374    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.244841    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:23.244851    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.244856    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.244861    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.246650    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:23.742870    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:23.742914    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.742924    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.742931    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.745627    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:23.746052    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:23.746060    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:23.746065    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:23.746068    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:23.747609    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.242218    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:24.242231    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.242238    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.242242    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.244278    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:24.244830    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:24.244840    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.244846    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.244849    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.246617    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.743710    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:24.743732    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.743767    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.743774    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.746703    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:24.747074    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:24.747081    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:24.747086    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:24.747091    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:24.748857    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:24.749268    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:25.243132    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:25.243162    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.243175    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.243182    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.246637    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:25.247243    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:25.247251    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.247257    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.247261    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.248791    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:25.743144    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:25.743185    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.743194    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.743200    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.745534    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:25.746096    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:25.746104    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:25.746110    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:25.746114    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:25.747777    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:26.243397    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:26.243422    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.243434    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.243439    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.246724    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:26.247251    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:26.247258    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.247264    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.247267    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.248850    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:26.743796    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:26.743812    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.743818    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.743822    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.746038    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:26.746535    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:26.746543    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:26.746548    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:26.746552    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:26.748223    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:27.243865    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:27.243907    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.243915    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.243921    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.246152    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:27.246675    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:27.246682    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.246690    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.246694    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.248406    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:27.248807    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:27.743171    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:27.743187    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.743194    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.743198    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.745500    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:27.745988    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:27.745997    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:27.746002    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:27.746006    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:27.748595    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:28.242282    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:28.242301    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.242313    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.242319    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.245501    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:28.246247    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:28.246255    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.246261    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.246264    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.247902    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:28.743212    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:28.743236    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.743249    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.743260    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.746405    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:28.747013    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:28.747024    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:28.747033    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:28.747036    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:28.748962    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:29.242696    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:29.242721    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.242759    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.242768    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.246203    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:29.246735    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:29.246743    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.246748    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.246751    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.248540    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:29.248873    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:29.742874    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:29.742909    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.742916    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.742920    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.745853    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:29.746241    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:29.746248    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:29.746254    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:29.746258    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:29.747886    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:30.242344    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:30.242398    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.242412    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.242417    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.245482    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:30.246231    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:30.246239    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.246243    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.246249    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.247931    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:30.743687    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:30.743739    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.743748    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.743754    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.746284    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:30.746897    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:30.746904    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:30.746910    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:30.746919    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:30.748657    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.242762    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:31.242802    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.242815    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.242821    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.244879    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:31.245288    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:31.245296    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.245302    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.245305    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.246940    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.744167    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:31.744190    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.744201    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.744210    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.747694    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:31.748330    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:31.748354    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:31.748359    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:31.748363    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:31.750021    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:31.750280    4318 pod_ready.go:103] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 10:24:32.243257    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:32.243276    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.243287    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.243295    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.246666    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:32.247294    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:32.247301    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.247307    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.247315    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.249071    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:32.742445    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:32.742465    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.742477    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.742486    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.745063    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:32.745573    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:32.745581    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:32.745586    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:32.745590    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:32.747244    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.242932    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:33.242948    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.242957    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.242960    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.245698    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:33.246162    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.246170    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.246176    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.246180    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.248030    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.743607    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-744000-m03
	I0917 10:24:33.743630    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.743677    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.743686    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.747091    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:33.747696    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.747706    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.747715    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.747721    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.749482    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.749881    4318 pod_ready.go:93] pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.749891    4318 pod_ready.go:82] duration metric: took 13.507764282s for pod "kube-apiserver-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
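
// The long wait for kube-apiserver-ha-744000-m03 above is a fixed-cadence
// poll: one pod GET plus one node GET roughly every 500ms, bounded by the
// 6m0s ceiling and ending as soon as Ready flips to True (13.5s here). A
// sketch using apimachinery's wait helpers; podIsReady is the hypothetical
// checker from the earlier sketch, not minikube's actual code.
package sketch

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podIsReady(ctx, cs, ns, name)
		})
}
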
	I0917 10:24:33.749898    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.749929    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000
	I0917 10:24:33.749934    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.749939    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.749944    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.751607    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.752009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:33.752016    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.752022    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.752026    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.753479    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.753776    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.753784    4318 pod_ready.go:82] duration metric: took 3.88171ms for pod "kube-controller-manager-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.753790    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.753823    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m02
	I0917 10:24:33.753827    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.753833    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.753838    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.755454    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.755911    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:33.755918    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.755924    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.755927    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.757319    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.757679    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.757688    4318 pod_ready.go:82] duration metric: took 3.892056ms for pod "kube-controller-manager-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.757694    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.757728    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-744000-m03
	I0917 10:24:33.757735    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.757741    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.757744    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.759325    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.759692    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:33.759699    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.759705    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.759708    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.761363    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.761694    4318 pod_ready.go:93] pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.761703    4318 pod_ready.go:82] duration metric: took 4.003379ms for pod "kube-controller-manager-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.761709    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.761744    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-66bkb
	I0917 10:24:33.761749    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.761754    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.761759    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.763321    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.763721    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m04
	I0917 10:24:33.763727    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.763733    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.763737    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.765414    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:33.765712    4318 pod_ready.go:93] pod "kube-proxy-66bkb" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:33.765720    4318 pod_ready.go:82] duration metric: took 4.007111ms for pod "kube-proxy-66bkb" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.765726    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:33.944183    4318 request.go:632] Waited for 178.404523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:24:33.944229    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xd2h
	I0917 10:24:33.944237    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:33.944268    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:33.944273    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:33.946730    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.143628    4318 request.go:632] Waited for 196.302632ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:34.143662    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:34.143667    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.143673    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.143676    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.145586    4318 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 10:24:34.145943    4318 pod_ready.go:93] pod "kube-proxy-6xd2h" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.145952    4318 pod_ready.go:82] duration metric: took 380.218476ms for pod "kube-proxy-6xd2h" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.145958    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.343736    4318 request.go:632] Waited for 197.699564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:24:34.343783    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5xbc
	I0917 10:24:34.343789    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.343820    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.343834    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.346285    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.544565    4318 request.go:632] Waited for 197.654167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:34.544605    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:34.544613    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.544621    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.544627    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.547228    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:34.547536    4318 pod_ready.go:93] pod "kube-proxy-c5xbc" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.547544    4318 pod_ready.go:82] duration metric: took 401.579042ms for pod "kube-proxy-c5xbc" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.547551    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.745694    4318 request.go:632] Waited for 198.04491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:24:34.745741    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9xsp
	I0917 10:24:34.745751    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.745761    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.745768    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.749007    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:34.944446    4318 request.go:632] Waited for 194.709353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:34.944508    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:34.944519    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:34.944530    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:34.944538    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:34.948023    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:34.948529    4318 pod_ready.go:93] pod "kube-proxy-k9xsp" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:34.948539    4318 pod_ready.go:82] duration metric: took 400.98043ms for pod "kube-proxy-k9xsp" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:34.948546    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.144352    4318 request.go:632] Waited for 195.670277ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:24:35.144418    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000
	I0917 10:24:35.144427    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.144435    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.144444    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.148047    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:35.345672    4318 request.go:632] Waited for 197.054602ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:35.345814    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000
	I0917 10:24:35.345826    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.345837    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.345847    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.350008    4318 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 10:24:35.350440    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:35.350449    4318 pod_ready.go:82] duration metric: took 401.89555ms for pod "kube-scheduler-ha-744000" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.350455    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.545736    4318 request.go:632] Waited for 195.218553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:24:35.545818    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m02
	I0917 10:24:35.545826    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.545834    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.545838    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.548444    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:35.743956    4318 request.go:632] Waited for 195.068268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:35.744009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m02
	I0917 10:24:35.744018    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.744069    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.744076    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.747579    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:35.748084    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:35.748097    4318 pod_ready.go:82] duration metric: took 397.633311ms for pod "kube-scheduler-ha-744000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.748105    4318 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:35.943849    4318 request.go:632] Waited for 195.677443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:35.943994    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:35.944005    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:35.944016    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:35.944023    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:35.947546    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.144032    4318 request.go:632] Waited for 195.696928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.144124    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.144136    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.144152    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.144160    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.147113    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:36.344824    4318 request.go:632] Waited for 96.483405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.344983    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.344994    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.345004    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.345015    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.348529    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.544910    4318 request.go:632] Waited for 195.649777ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.545008    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.545020    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.545031    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.545037    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.548104    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.748291    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:36.748355    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.748369    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.748376    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.751622    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:36.945151    4318 request.go:632] Waited for 192.867405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.945191    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:36.945197    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:36.945223    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:36.945245    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:36.948349    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.249285    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-744000-m03
	I0917 10:24:37.249335    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.249350    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.249356    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.252559    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.344915    4318 request.go:632] Waited for 91.666148ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:37.345009    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-744000-m03
	I0917 10:24:37.345019    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.345029    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.345039    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.348586    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
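The recurring "Waited for … due to client-side throttling" lines above come from client-go's token-bucket request limiter, not from API-server priority and fairness. Below is a minimal sketch of that behavior, assuming the standard k8s.io/client-go/util/flowcontrol package and the commonly used defaults of QPS 5 / burst 10 (the exact values are assumptions, not read from this log):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

// A token-bucket limiter (QPS 5, burst 10 -- assumed defaults) lets the
// first 10 back-to-back requests through, then makes each further call
// block until a token refills, producing waits like the ones logged above.
func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks while the bucket is empty
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d: waited %v due to client-side throttling\n", i, waited)
		}
	}
}
```

Once the burst is spent, each additional call blocks roughly 1/QPS seconds, which matches the ~100-200ms waits reported in the log.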
	I0917 10:24:37.348906    4318 pod_ready.go:93] pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 10:24:37.348918    4318 pod_ready.go:82] duration metric: took 1.600795502s for pod "kube-scheduler-ha-744000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 10:24:37.348928    4318 pod_ready.go:39] duration metric: took 18.335907637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 10:24:37.348941    4318 api_server.go:52] waiting for apiserver process to appear ...
	I0917 10:24:37.349014    4318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:24:37.361991    4318 api_server.go:72] duration metric: took 18.519766947s to wait for apiserver process to appear ...
	I0917 10:24:37.362004    4318 api_server.go:88] waiting for apiserver healthz status ...
	I0917 10:24:37.362016    4318 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 10:24:37.365142    4318 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 10:24:37.365173    4318 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 10:24:37.365178    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.365184    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.365188    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.365770    4318 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 10:24:37.365800    4318 api_server.go:141] control plane version: v1.31.1
	I0917 10:24:37.365807    4318 api_server.go:131] duration metric: took 3.798093ms to wait for apiserver health ...
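The healthz wait above issues plain HTTPS GETs against /healthz until the body reads "ok". A self-contained sketch of that polling loop follows; the InsecureSkipVerify transport is an assumption made to keep the example short (the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll the apiserver healthz endpoint until it returns 200 with body "ok",
// or give up after the timeout.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ok within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.169.0.5:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}
```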
	I0917 10:24:37.365812    4318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 10:24:37.544057    4318 request.go:632] Waited for 178.188238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.544191    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.544207    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.544224    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.544234    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.549291    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:37.554725    4318 system_pods.go:59] 26 kube-system pods found
	I0917 10:24:37.554740    4318 system_pods.go:61] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.554746    4318 system_pods.go:61] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.554752    4318 system_pods.go:61] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:24:37.554756    4318 system_pods.go:61] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running
	I0917 10:24:37.554759    4318 system_pods.go:61] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:24:37.554761    4318 system_pods.go:61] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:24:37.554764    4318 system_pods.go:61] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:24:37.554769    4318 system_pods.go:61] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running
	I0917 10:24:37.554772    4318 system_pods.go:61] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:24:37.554774    4318 system_pods.go:61] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:24:37.554778    4318 system_pods.go:61] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running
	I0917 10:24:37.554781    4318 system_pods.go:61] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:24:37.554784    4318 system_pods.go:61] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:24:37.554787    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running
	I0917 10:24:37.554791    4318 system_pods.go:61] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:24:37.554794    4318 system_pods.go:61] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:24:37.554797    4318 system_pods.go:61] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:24:37.554800    4318 system_pods.go:61] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:24:37.554802    4318 system_pods.go:61] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running
	I0917 10:24:37.554805    4318 system_pods.go:61] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:24:37.554808    4318 system_pods.go:61] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running
	I0917 10:24:37.554811    4318 system_pods.go:61] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:24:37.554813    4318 system_pods.go:61] "kube-vip-ha-744000" [bcb8c990-8b77-4e1d-bf96-614e9da8bf60] Running
	I0917 10:24:37.554816    4318 system_pods.go:61] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:24:37.554818    4318 system_pods.go:61] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:24:37.554821    4318 system_pods.go:61] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:24:37.554825    4318 system_pods.go:74] duration metric: took 189.008209ms to wait for pod list to return data ...
	I0917 10:24:37.554830    4318 default_sa.go:34] waiting for default service account to be created ...
	I0917 10:24:37.744848    4318 request.go:632] Waited for 189.951036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:24:37.744937    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 10:24:37.744950    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.744962    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.744968    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.748818    4318 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 10:24:37.748898    4318 default_sa.go:45] found service account: "default"
	I0917 10:24:37.748910    4318 default_sa.go:55] duration metric: took 194.07297ms for default service account to be created ...
	I0917 10:24:37.748917    4318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 10:24:37.945360    4318 request.go:632] Waited for 196.381657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.945493    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 10:24:37.945504    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:37.945515    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:37.945524    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:37.951048    4318 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 10:24:37.956873    4318 system_pods.go:86] 26 kube-system pods found
	I0917 10:24:37.956886    4318 system_pods.go:89] "coredns-7c65d6cfc9-j9jcc" [9dee1b9e-42cf-42e2-b53b-3b77c6884b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.956893    4318 system_pods.go:89] "coredns-7c65d6cfc9-khnlh" [bfb8e428-55de-48e2-bea4-23d0550429ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 10:24:37.956898    4318 system_pods.go:89] "etcd-ha-744000" [f3395eb8-7b48-4b00-83a2-b2fa7f7b346e] Running
	I0917 10:24:37.956901    4318 system_pods.go:89] "etcd-ha-744000-m02" [06620cf2-3cd6-4d65-a93e-a06bc73cbfec] Running
	I0917 10:24:37.956905    4318 system_pods.go:89] "etcd-ha-744000-m03" [484a01c2-8847-41a7-bbad-3cac503800b7] Running
	I0917 10:24:37.956908    4318 system_pods.go:89] "kindnet-bdjj4" [ef84f2d4-bb25-4791-9c63-2ebd378fffce] Running
	I0917 10:24:37.956910    4318 system_pods.go:89] "kindnet-c59lr" [b8c667b1-4d2e-48d1-b667-be0a602aaca3] Running
	I0917 10:24:37.956915    4318 system_pods.go:89] "kindnet-r77t5" [184431bd-17fd-41e5-86bb-6213b4be89b6] Running
	I0917 10:24:37.956918    4318 system_pods.go:89] "kindnet-wqkz7" [7e9ecf5e-795d-401b-91e5-7b713e07415f] Running
	I0917 10:24:37.956921    4318 system_pods.go:89] "kube-apiserver-ha-744000" [2f01f48c-5749-4e73-aa43-07d963238201] Running
	I0917 10:24:37.956927    4318 system_pods.go:89] "kube-apiserver-ha-744000-m02" [ddfb6abd-2e7f-46b2-838a-27c2b954c172] Running
	I0917 10:24:37.956931    4318 system_pods.go:89] "kube-apiserver-ha-744000-m03" [55f5859f-d639-4319-b54a-f29a6b63ee10] Running
	I0917 10:24:37.956933    4318 system_pods.go:89] "kube-controller-manager-ha-744000" [452feaf3-8d4d-4eec-b02c-3c10f417496a] Running
	I0917 10:24:37.956939    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m02" [34e5bdf1-892c-448a-8211-71250914c702] Running
	I0917 10:24:37.956943    4318 system_pods.go:89] "kube-controller-manager-ha-744000-m03" [154abb75-b9c8-41af-84c3-5bf98e3eeb36] Running
	I0917 10:24:37.956945    4318 system_pods.go:89] "kube-proxy-66bkb" [7821858b-abb3-4eb3-9046-f58a13f48267] Running
	I0917 10:24:37.956948    4318 system_pods.go:89] "kube-proxy-6xd2h" [a4ef0490-24b0-4b96-8760-4c14f6f14f30] Running
	I0917 10:24:37.956951    4318 system_pods.go:89] "kube-proxy-c5xbc" [46d93318-6e9e-4eb7-ab29-d4160ed7530c] Running
	I0917 10:24:37.956954    4318 system_pods.go:89] "kube-proxy-k9xsp" [1eb4370d-e8ff-429d-be17-80f938972889] Running
	I0917 10:24:37.956957    4318 system_pods.go:89] "kube-scheduler-ha-744000" [e3ccdd5b-d861-4968-86b3-49b496f39f03] Running
	I0917 10:24:37.956960    4318 system_pods.go:89] "kube-scheduler-ha-744000-m02" [aeb7e010-3c1e-4fc4-927c-dde8c8e0f093] Running
	I0917 10:24:37.956962    4318 system_pods.go:89] "kube-scheduler-ha-744000-m03" [7de6e8a5-5073-4023-8915-fea59777a43d] Running
	I0917 10:24:37.956966    4318 system_pods.go:89] "kube-vip-ha-744000" [bcb8c990-8b77-4e1d-bf96-614e9da8bf60] Running
	I0917 10:24:37.956968    4318 system_pods.go:89] "kube-vip-ha-744000-m02" [1ea5797a-c611-4353-9d8e-4675bc626ff1] Running
	I0917 10:24:37.956972    4318 system_pods.go:89] "kube-vip-ha-744000-m03" [1273932d-f15c-4e02-9dc3-07aa96dd108f] Running
	I0917 10:24:37.956975    4318 system_pods.go:89] "storage-provisioner" [9c968c58-13fc-40ef-8098-3b66787272db] Running
	I0917 10:24:37.956980    4318 system_pods.go:126] duration metric: took 208.057925ms to wait for k8s-apps to be running ...
	I0917 10:24:37.956985    4318 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 10:24:37.957044    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:24:37.968066    4318 system_svc.go:56] duration metric: took 11.076755ms WaitForService to wait for kubelet
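The kubelet check relies entirely on systemctl's exit status: `is-active --quiet` prints nothing and exits 0 only when the unit is active. A local stand-in sketch (the real command runs through the SSH runner, so exec.Command here is an assumption for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
)

// serviceActive mirrors the logged check: a zero exit code from
// `systemctl is-active --quiet service <unit>` means the unit is running.
func serviceActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
```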
	I0917 10:24:37.968081    4318 kubeadm.go:582] duration metric: took 19.125854064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:24:37.968093    4318 node_conditions.go:102] verifying NodePressure condition ...
	I0917 10:24:38.144749    4318 request.go:632] Waited for 176.615288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 10:24:38.144801    4318 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 10:24:38.144806    4318 round_trippers.go:469] Request Headers:
	I0917 10:24:38.144812    4318 round_trippers.go:473]     Accept: application/json, */*
	I0917 10:24:38.144819    4318 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 10:24:38.147413    4318 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 10:24:38.148237    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148247    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148254    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148257    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148261    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148265    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148268    4318 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 10:24:38.148271    4318 node_conditions.go:123] node cpu capacity is 2
	I0917 10:24:38.148274    4318 node_conditions.go:105] duration metric: took 180.176513ms to run NodePressure ...
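The NodePressure readout lists every node and reports its cpu and ephemeral-storage capacity, which is why the same pair of lines repeats four times for the four cluster nodes. A hedged client-go sketch of that readout; the kubeconfig path is a placeholder assumption:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List all nodes and print the capacity fields that appear in the
// node_conditions log lines above.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```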
	I0917 10:24:38.148284    4318 start.go:241] waiting for startup goroutines ...
	I0917 10:24:38.148299    4318 start.go:255] writing updated cluster config ...
	I0917 10:24:38.170792    4318 out.go:201] 
	I0917 10:24:38.192139    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:24:38.192258    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.214598    4318 out.go:177] * Starting "ha-744000-m04" worker node in "ha-744000" cluster
	I0917 10:24:38.256637    4318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:24:38.256664    4318 cache.go:56] Caching tarball of preloaded images
	I0917 10:24:38.256839    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:24:38.256857    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:24:38.256981    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.257985    4318 start.go:360] acquireMachinesLock for ha-744000-m04: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:24:38.258078    4318 start.go:364] duration metric: took 72.145µs to acquireMachinesLock for "ha-744000-m04"
	I0917 10:24:38.258103    4318 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:24:38.258112    4318 fix.go:54] fixHost starting: m04
	I0917 10:24:38.258540    4318 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:24:38.258566    4318 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:24:38.268106    4318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51981
	I0917 10:24:38.268448    4318 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:24:38.268812    4318 main.go:141] libmachine: Using API Version  1
	I0917 10:24:38.268827    4318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:24:38.269077    4318 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:24:38.269188    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:24:38.269289    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetState
	I0917 10:24:38.269369    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.269469    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 3930
	I0917 10:24:38.270534    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid 3930 missing from process table
	I0917 10:24:38.270552    4318 fix.go:112] recreateIfNeeded on ha-744000-m04: state=Stopped err=<nil>
	I0917 10:24:38.270560    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	W0917 10:24:38.270638    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:24:38.291868    4318 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m04" ...
	I0917 10:24:38.333636    4318 main.go:141] libmachine: (ha-744000-m04) Calling .Start
	I0917 10:24:38.333893    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.333997    4318 main.go:141] libmachine: (ha-744000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid
	I0917 10:24:38.334050    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Using UUID a75a0481-aaf0-49d3-9d6e-de3c56706456
	I0917 10:24:38.361417    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Generated MAC b6:cf:5d:a2:4f:b0
	I0917 10:24:38.361439    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:24:38.361574    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75a0481-aaf0-49d3-9d6e-de3c56706456", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f6270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:24:38.361608    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75a0481-aaf0-49d3-9d6e-de3c56706456", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f6270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:24:38.361683    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a75a0481-aaf0-49d3-9d6e-de3c56706456", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/ha-744000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:24:38.361733    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a75a0481-aaf0-49d3-9d6e-de3c56706456 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/ha-744000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:24:38.361747    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:24:38.363077    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 DEBUG: hyperkit: Pid is 4356
	I0917 10:24:38.363455    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Attempt 0
	I0917 10:24:38.363472    4318 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:24:38.363519    4318 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 4356
	I0917 10:24:38.365806    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Searching for b6:cf:5d:a2:4f:b0 in /var/db/dhcpd_leases ...
	I0917 10:24:38.365879    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:24:38.365922    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66eb0cb7}
	I0917 10:24:38.365937    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:24:38.365950    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:24:38.365959    4318 main.go:141] libmachine: (ha-744000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66e9bade}
	I0917 10:24:38.365986    4318 main.go:141] libmachine: (ha-744000-m04) DBG | Found match: b6:cf:5d:a2:4f:b0
	I0917 10:24:38.365994    4318 main.go:141] libmachine: (ha-744000-m04) DBG | IP: 192.169.0.8
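The driver resolves the VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated. A sketch of that lookup; the lease-block field names (ip_address=, hw_address=1,<mac>) are assumed from the typical bootpd lease format echoed in the dhcp entry lines above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans bootpd lease blocks for a matching hw_address and returns
// the ip_address recorded earlier in the same block (ip_address precedes
// hw_address in the assumed file layout).
func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,b6:cf:5d:a2:4f:b0 -- strip the "1," type prefix
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "b6:cf:5d:a2:4f:b0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}
```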
	I0917 10:24:38.366035    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetConfigRaw
	I0917 10:24:38.366790    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:24:38.367002    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:24:38.367474    4318 machine.go:93] provisionDockerMachine start ...
	I0917 10:24:38.367487    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:24:38.367618    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:24:38.367733    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:24:38.367825    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:24:38.367932    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:24:38.368026    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:24:38.368135    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:24:38.368308    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:24:38.368315    4318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:24:38.371140    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:24:38.380744    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:24:38.381595    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:24:38.381618    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:24:38.381626    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:24:38.381634    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:24:38.766023    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:24:38.766038    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:24:38.880838    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:24:38.880856    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:24:38.880875    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:24:38.880896    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:24:38.881691    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:24:38.881699    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:24:44.498444    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 10:24:44.498459    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 10:24:44.498494    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 10:24:44.523076    4318 main.go:141] libmachine: (ha-744000-m04) DBG | 2024/09/17 10:24:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 10:25:13.428240    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:25:13.428258    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.428409    4318 buildroot.go:166] provisioning hostname "ha-744000-m04"
	I0917 10:25:13.428420    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.428514    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.428620    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.428723    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.428810    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.428889    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.429066    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.429209    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.429217    4318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m04 && echo "ha-744000-m04" | sudo tee /etc/hostname
	I0917 10:25:13.489074    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m04
	
	I0917 10:25:13.489089    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.489213    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.489306    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.489396    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.489496    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.489633    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.489780    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.489791    4318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:25:13.545140    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:25:13.545156    4318 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:25:13.545164    4318 buildroot.go:174] setting up certificates
	I0917 10:25:13.545177    4318 provision.go:84] configureAuth start
	I0917 10:25:13.545184    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetMachineName
	I0917 10:25:13.545313    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:25:13.545408    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.545491    4318 provision.go:143] copyHostCerts
	I0917 10:25:13.545519    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:25:13.545566    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:25:13.545572    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:25:13.545709    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:25:13.545914    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:25:13.545947    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:25:13.545952    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:25:13.546020    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:25:13.546170    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:25:13.546203    4318 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:25:13.546208    4318 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:25:13.546273    4318 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:25:13.546422    4318 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m04 san=[127.0.0.1 192.169.0.8 ha-744000-m04 localhost minikube]
	I0917 10:25:13.728947    4318 provision.go:177] copyRemoteCerts
	I0917 10:25:13.729001    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:25:13.729019    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.729159    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.729267    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.729352    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.729436    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:13.760341    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:25:13.760415    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:25:13.780212    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:25:13.780295    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:25:13.799969    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:25:13.800048    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:25:13.820126    4318 provision.go:87] duration metric: took 274.938832ms to configureAuth
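configureAuth regenerates the machine's server certificate, signed by the local CA, with the SANs listed in the provision.go:117 line (127.0.0.1, 192.169.0.8, ha-744000-m04, localhost, minikube). A crypto/x509 sketch of that issuance; the throwaway CA and the template fields are illustrative assumptions (the real CA lives in certs/ca.pem and certs/ca-key.pem):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a CA-signed server certificate carrying the machine's
// IPs and hostnames as SANs, returning the DER bytes and the new private key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000-m04"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-744000-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway CA for the demo only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert: %d DER bytes\n", len(der))
}
```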
	I0917 10:25:13.820140    4318 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:25:13.820316    4318 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:25:13.820363    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:13.820492    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.820577    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.820675    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.820756    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.820822    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.820952    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.821086    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.821093    4318 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:25:13.869340    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:25:13.869359    4318 buildroot.go:70] root file system type: tmpfs
	I0917 10:25:13.869441    4318 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:25:13.869457    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.869595    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.869683    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.869771    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.869861    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.870006    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.870149    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.870194    4318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:25:13.929484    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:25:13.929501    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:13.929632    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:13.929718    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.929806    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:13.929887    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:13.930023    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:13.930160    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:13.930175    4318 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:25:15.508327    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:25:15.508343    4318 machine.go:96] duration metric: took 37.140625742s to provisionDockerMachine
	I0917 10:25:15.508350    4318 start.go:293] postStartSetup for "ha-744000-m04" (driver="hyperkit")
	I0917 10:25:15.508359    4318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:25:15.508370    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.508567    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:25:15.508581    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.508684    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.508771    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.508863    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.508959    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.539960    4318 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:25:15.543053    4318 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:25:15.543063    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:25:15.543160    4318 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:25:15.543298    4318 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:25:15.543305    4318 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:25:15.543461    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:25:15.551517    4318 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:25:15.570767    4318 start.go:296] duration metric: took 62.406299ms for postStartSetup
	I0917 10:25:15.570789    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.570981    4318 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:25:15.570995    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.571091    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.571171    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.571256    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.571333    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.602758    4318 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:25:15.602836    4318 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:25:15.637575    4318 fix.go:56] duration metric: took 37.37922575s for fixHost
	I0917 10:25:15.637622    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.637768    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.637924    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.638031    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.638176    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.638325    4318 main.go:141] libmachine: Using SSH client type: native
	I0917 10:25:15.638471    4318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10858820] 0x1085b500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 10:25:15.638479    4318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:25:15.688928    4318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593915.722853111
	
	I0917 10:25:15.688940    4318 fix.go:216] guest clock: 1726593915.722853111
	I0917 10:25:15.688945    4318 fix.go:229] Guest: 2024-09-17 10:25:15.722853111 -0700 PDT Remote: 2024-09-17 10:25:15.63759 -0700 PDT m=+131.293327303 (delta=85.263111ms)
	I0917 10:25:15.688955    4318 fix.go:200] guest clock delta is within tolerance: 85.263111ms
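The guest-clock check runs `date +%s.%N` inside the VM, parses the fractional-seconds output, and accepts the host/guest delta when it falls inside a tolerance window. A sketch of the comparison; the 2s tolerance is an assumed value for the example:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output (the %N field is always 9 digits,
// i.e. nanoseconds) and returns guest-minus-host drift.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(hostNow), nil
}

func main() {
	delta, err := clockDelta("1726593915.722853111", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed window
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```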
	I0917 10:25:15.688959    4318 start.go:83] releasing machines lock for "ha-744000-m04", held for 37.430633857s
	I0917 10:25:15.688978    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.689103    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:25:15.710671    4318 out.go:177] * Found network options:
	I0917 10:25:15.731491    4318 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 10:25:15.753310    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.753333    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.753342    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:25:15.753356    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.753871    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.754022    4318 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:25:15.754119    4318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:25:15.754146    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	W0917 10:25:15.754178    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.754208    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 10:25:15.754223    4318 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:25:15.754296    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.754303    4318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:25:15.754334    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:25:15.754432    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:25:15.754453    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.754575    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:25:15.754604    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.754689    4318 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:25:15.754711    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:25:15.754792    4318 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	W0917 10:25:15.782647    4318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:25:15.782713    4318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:25:15.824742    4318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:25:15.824761    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:25:15.824849    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:25:15.840222    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:25:15.849242    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:25:15.858317    4318 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:25:15.858387    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:25:15.867462    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:25:15.875738    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:25:15.884682    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:25:15.893510    4318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:25:15.902446    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:25:15.911295    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:25:15.919994    4318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:25:15.928900    4318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:25:15.936904    4318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:25:15.944894    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:25:16.041231    4318 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:25:16.060721    4318 start.go:495] detecting cgroup driver to use...
	I0917 10:25:16.060799    4318 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:25:16.080747    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:25:16.095004    4318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:25:16.114244    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:25:16.125786    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:25:16.137258    4318 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:25:16.158423    4318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:25:16.170393    4318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:25:16.185414    4318 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:25:16.188334    4318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:25:16.196827    4318 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:25:16.210659    4318 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:25:16.305554    4318 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:25:16.409957    4318 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:25:16.409982    4318 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
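The 130-byte /etc/docker/daemon.json is not shown in the log; a hypothetical payload matching the "cgroupfs" line above would set Docker's exec-opts, the standard knob for the cgroup driver:

    # hypothetical contents; the real file may carry additional keys
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF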
	I0917 10:25:16.425083    4318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:25:16.535715    4318 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:26:17.562416    4318 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.026297453s)
	I0917 10:26:17.562497    4318 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:26:17.630222    4318 out.go:201] 
	W0917 10:26:17.651239    4318 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:25:13 ha-744000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.456528847Z" level=info msg="Starting up"
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.457229245Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:25:13 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:13.457756278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=515
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.475582216Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490758453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490898800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.490976043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491011334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491152047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491195568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491328519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491366944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491397636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491431172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491542048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.491732624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493310341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493359335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493488280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493534970Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493652714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.493714896Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494789743Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494871313Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494917161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494950579Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.494983897Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495053063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495291226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495375682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495419457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495464742Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495500431Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495531945Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495563543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495597416Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495628537Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495658774Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495687956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495720478Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495838245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495897691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495950377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.495999910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496037282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496068360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496098684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496129402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496180048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496224888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496258746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496292925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496328738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496361060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496398155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496429539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496458278Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496532105Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496577809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496631209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496668767Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496701760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496732507Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496764331Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.496955260Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497045520Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497161388Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:25:13 ha-744000-m04 dockerd[515]: time="2024-09-17T17:25:13.497218646Z" level=info msg="containerd successfully booted in 0.022496s"
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.478225250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.497615871Z" level=info msg="Loading containers: start."
	Sep 17 17:25:14 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:14.589404703Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.466302251Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.511791263Z" level=info msg="Loading containers: done."
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.521663721Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.521829028Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.541037196Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:25:15 ha-744000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:25:15 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:15.542461858Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:25:16 ha-744000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.587552960Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588424393Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588788736Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588860910Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:25:16 ha-744000-m04 dockerd[509]: time="2024-09-17T17:25:16.588877844Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:25:17 ha-744000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:25:17 ha-744000-m04 dockerd[1095]: time="2024-09-17T17:25:17.626813653Z" level=info msg="Starting up"
	Sep 17 17:26:17 ha-744000-m04 dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:26:17 ha-744000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
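One plausible reading of the journal above: the first daemon (dockerd[509]) spawns its own managed containerd and comes up cleanly, while the restarted daemon (dockerd[1095]) blocks for a minute dialing /run/containerd/containerd.sock, the system containerd socket, which minikube had stopped moments earlier at 10:25:16.095. A first diagnostic pass on the node would be (sketch):

    sudo systemctl status containerd --no-pager
    ls -l /run/containerd/containerd.sock                 # does the socket exist at all?
    sudo journalctl -u containerd --no-pager | tail -n 50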
	W0917 10:26:17.651325    4318 out.go:270] * 
	W0917 10:26:17.652544    4318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:26:17.714012    4318 out.go:201] 
	
	
	==> Docker <==
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.268707916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281047915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281247421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281280865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.281415634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.306942894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.307034217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.307049248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.307123216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345168645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345400515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345417057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.345534846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371315730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371503024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371534239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:20 ha-744000 dockerd[1165]: time="2024-09-17T17:24:20.371698549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:24:50 ha-744000 dockerd[1165]: time="2024-09-17T17:24:50.911074437Z" level=info msg="shim disconnected" id=8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2 namespace=moby
	Sep 17 17:24:50 ha-744000 dockerd[1165]: time="2024-09-17T17:24:50.911145697Z" level=warning msg="cleaning up after shim disconnected" id=8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2 namespace=moby
	Sep 17 17:24:50 ha-744000 dockerd[1165]: time="2024-09-17T17:24:50.911154909Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:24:50 ha-744000 dockerd[1159]: time="2024-09-17T17:24:50.911891905Z" level=info msg="ignoring event" container=8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.183917900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.184095170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.184121704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:25:06 ha-744000 dockerd[1165]: time="2024-09-17T17:25:06.184219800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1b95d7a1c7708       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   375cde06a4bcf       storage-provisioner
	079da006755a7       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   f0eee6e67fe42       busybox-7dff88458-cn52t
	9f76145e8eaf7       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   8b4b5191649e7       kindnet-c59lr
	6a4aba3acb1e9       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   3888ce04e78db       coredns-7c65d6cfc9-khnlh
	8fea3c0c8d014       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   375cde06a4bcf       storage-provisioner
	fb8b83fe49a6e       60c005f310ff3                                                                                         2 minutes ago        Running             kube-proxy                1                   f1782d63db94f       kube-proxy-6xd2h
	24cfd031ec879       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   244f5bc456efc       coredns-7c65d6cfc9-j9jcc
	12b3b4eba9d4b       175ffd71cce3d                                                                                         2 minutes ago        Running             kube-controller-manager   2                   1ec7133566130       kube-controller-manager-ha-744000
	cfbfd57cf2b56       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  0                   433c480eea542       kube-vip-ha-744000
	2e26c6d8d6f01       6bab7719df100                                                                                         3 minutes ago        Running             kube-apiserver            1                   17c507064e8cf       kube-apiserver-ha-744000
	e2a0b2a78de14       175ffd71cce3d                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   1ec7133566130       kube-controller-manager-ha-744000
	a7645ef2ae8dd       9aa1fad941575                                                                                         3 minutes ago        Running             kube-scheduler            1                   fbf79ae31cbab       kube-scheduler-ha-744000
	23a7e0d95a77c       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      1                   55cb3d05ddf34       etcd-ha-744000
	2d870e01d6884       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   35535e8fc0b28       busybox-7dff88458-cn52t
	483eb8f98687f       c69fa2e9cbf5f                                                                                         8 minutes ago        Exited              coredns                   0                   8108990228d29       coredns-7c65d6cfc9-khnlh
	916943d59881d       c69fa2e9cbf5f                                                                                         8 minutes ago        Exited              coredns                   0                   804209193fefd       coredns-7c65d6cfc9-j9jcc
	c585358c16494       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              8 minutes ago        Exited              kindnet-cni               0                   1b8517a154f2d       kindnet-c59lr
	8b4d53aa2a212       60c005f310ff3                                                                                         8 minutes ago        Exited              kube-proxy                0                   7026bc0d7935b       kube-proxy-6xd2h
	b88f9e96fc4a3       9aa1fad941575                                                                                         8 minutes ago        Exited              kube-scheduler            0                   26a4b719c81b3       kube-scheduler-ha-744000
	8d4b19b4762b9       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      0                   d38e9fc592fbb       etcd-ha-744000
	0468a8663a15a       6bab7719df100                                                                                         8 minutes ago        Exited              kube-apiserver            0                   183fe28646c54       kube-apiserver-ha-744000
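The table above is an all-containers status listing in crictl style, Exited attempts included; on the node it resembles what a sketch like this would print:

    sudo crictl ps -a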
	
	
	==> coredns [24cfd031ec87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52682 - 33898 "HINFO IN 2709939145458862568.721558315158165230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009931439s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[318103159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.683) (total time: 30003ms):
	Trace[318103159]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:24:50.686)
	Trace[318103159]: [30.003131559s] [30.003131559s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1979128092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1979128092]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1979128092]: [30.000652416s] [30.000652416s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1978210991]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1978210991]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1978210991]: [30.000766886s] [30.000766886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
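Both restarted coredns pods spend 30s failing to reach the kubernetes Service VIP at 10.96.0.1:443 before giving up, which points at Service routing rather than DNS itself. A sketch for checking that kube-proxy has programmed the VIP (assumes iptables proxy mode and a working kubeconfig):

    sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
    kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20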
	
	
	==> coredns [483eb8f98687] <==
	[INFO] 10.244.0.4:49921 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001090777s
	[INFO] 10.244.0.4:38072 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093692s
	[INFO] 10.244.0.4:52268 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010201s
	[INFO] 10.244.0.4:39332 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065274s
	[INFO] 10.244.1.2:50067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097272s
	[INFO] 10.244.1.2:59778 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076291s
	[INFO] 10.244.1.2:40527 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006494s
	[INFO] 10.244.1.2:55267 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103302s
	[INFO] 10.244.1.2:48936 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076215s
	[INFO] 10.244.2.2:35568 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075643s
	[INFO] 10.244.2.2:33950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075232s
	[INFO] 10.244.0.4:34208 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090644s
	[INFO] 10.244.0.4:48674 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132902s
	[INFO] 10.244.0.4:33737 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008542s
	[INFO] 10.244.0.4:52920 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144911s
	[INFO] 10.244.1.2:35106 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080607s
	[INFO] 10.244.1.2:56698 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084976s
	[INFO] 10.244.2.2:34296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174512s
	[INFO] 10.244.2.2:33488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117345s
	[INFO] 10.244.0.4:38670 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010498s
	[INFO] 10.244.0.4:40491 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111462s
	[INFO] 10.244.0.4:48717 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000119132s
	[INFO] 10.244.2.2:47158 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110576s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a4aba3acb1e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60360 - 19575 "HINFO IN 3607648931521447410.3411894034218696920. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009401347s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960564509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1960564509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.746)
	Trace[1960564509]: [30.00213331s] [30.00213331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197674287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1197674287]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[1197674287]: [30.002759704s] [30.002759704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[633118280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30003ms):
	Trace[633118280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[633118280]: [30.003193097s] [30.003193097s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [916943d59881] <==
	[INFO] 10.244.0.4:37739 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098877s
	[INFO] 10.244.0.4:40547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091048s
	[INFO] 10.244.1.2:44593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150145s
	[INFO] 10.244.1.2:56172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000115318s
	[INFO] 10.244.1.2:39487 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042632s
	[INFO] 10.244.2.2:45820 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136035s
	[INFO] 10.244.2.2:45888 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124378s
	[INFO] 10.244.2.2:33921 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103985s
	[INFO] 10.244.2.2:43324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000079133s
	[INFO] 10.244.2.2:40281 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099458s
	[INFO] 10.244.2.2:55515 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064602s
	[INFO] 10.244.1.2:35470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094431s
	[INFO] 10.244.1.2:39318 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101905s
	[INFO] 10.244.2.2:33069 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125468s
	[INFO] 10.244.2.2:58055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005s
	[INFO] 10.244.0.4:42955 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119337s
	[INFO] 10.244.1.2:56148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133985s
	[INFO] 10.244.1.2:41074 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070637s
	[INFO] 10.244.1.2:57011 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097568s
	[INFO] 10.244.1.2:54560 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000088217s
	[INFO] 10.244.2.2:40699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009838s
	[INFO] 10.244.2.2:56915 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009188s
	[INFO] 10.244.2.2:59087 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000063136s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-744000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T10_18_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:18:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:26:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:24:01 +0000   Tue, 17 Sep 2024 17:18:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-744000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e19ab4b42d3d4ad9a9c9862970c0a605
	  System UUID:                bcb541bd-0000-0000-81db-c015832629bb
	  Boot ID:                    3e522cae-7866-41e9-a155-4d8cabdebe35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cn52t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-j9jcc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m21s
	  kube-system                 coredns-7c65d6cfc9-khnlh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m21s
	  kube-system                 etcd-ha-744000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m25s
	  kube-system                 kindnet-c59lr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-ha-744000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-controller-manager-ha-744000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-6xd2h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-ha-744000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-vip-ha-744000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m19s                  kube-proxy       
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  Starting                 8m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m31s (x8 over 8m32s)  kubelet          Node ha-744000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m31s (x8 over 8m32s)  kubelet          Node ha-744000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m31s (x7 over 8m32s)  kubelet          Node ha-744000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m25s                  kubelet          Node ha-744000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m25s                  kubelet          Node ha-744000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m25s                  kubelet          Node ha-744000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m22s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  NodeReady                8m1s                   kubelet          Node ha-744000 status is now: NodeReady
	  Normal  RegisteredNode           7m22s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  Starting                 3m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m7s (x8 over 3m7s)    kubelet          Node ha-744000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x8 over 3m7s)    kubelet          Node ha-744000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x7 over 3m7s)    kubelet          Node ha-744000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m35s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
	  Normal  RegisteredNode           2m5s                   node-controller  Node ha-744000 event: Registered Node ha-744000 in Controller
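The node blocks in this section are standard kubectl output and can be regenerated directly:

    kubectl describe node ha-744000      # this node
    kubectl describe nodes               # every node, as captured here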
	
	
	Name:               ha-744000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T10_19_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:19:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:26:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:23:54 +0000   Tue, 17 Sep 2024 17:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-744000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 c94aa5595d5f4a1cb88c3b118576895e
	  System UUID:                84414fed-0000-0000-a88c-11fa06a6299e
	  Boot ID:                    11a7e2f2-378b-40ca-b409-09a9376b68fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qcdwg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 etcd-ha-744000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m27s
	  kube-system                 kindnet-r77t5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-apiserver-ha-744000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-controller-manager-ha-744000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-proxy-k9xsp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-ha-744000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-vip-ha-744000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   Starting                 7m24s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     7m29s                  cidrAllocator    Node ha-744000-m02 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  7m29s (x8 over 7m29s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m29s (x8 over 7m29s)  kubelet          Node ha-744000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m29s (x7 over 7m29s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m26s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Warning  Rebooted                 4m16s                  kubelet          Node ha-744000-m02 has been rebooted, boot id: 820b0469-454f-41f2-99e6-1215d352a125
	  Normal   Starting                 4m16s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m16s                  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m16s                  kubelet          Node ha-744000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m16s                  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node ha-744000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x7 over 2m47s)  kubelet          Node ha-744000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           2m20s                  node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	  Normal   RegisteredNode           2m5s                   node-controller  Node ha-744000-m02 event: Registered Node ha-744000-m02 in Controller
	
	
	Name:               ha-744000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-744000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-744000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T10_21_08_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:21:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-744000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:22:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 17:21:37 +0000   Tue, 17 Sep 2024 17:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-744000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 915aa5fe15514f39b3c6acca73576405
	  System UUID:                a75a49d3-0000-0000-9d6e-de3c56706456
	  Boot ID:                    1f7e3f8e-fb14-42e9-8e73-84c9c5c4de7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wqkz7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-proxy-66bkb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m24s (x2 over 5m25s)  kubelet          Node ha-744000-m04 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     5m24s                  cidrAllocator    Node ha-744000-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     5m24s                  cidrAllocator    Node ha-744000-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientPID     5m24s (x2 over 5m25s)  kubelet          Node ha-744000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m24s (x2 over 5m25s)  kubelet          Node ha-744000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  NodeReady                5m2s                   kubelet          Node ha-744000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           2m35s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  RegisteredNode           2m5s                   node-controller  Node ha-744000-m04 event: Registered Node ha-744000-m04 in Controller
	  Normal  NodeNotReady             115s                   node-controller  Node ha-744000-m04 status is now: NodeNotReady
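
	The Unknown conditions and unreachable taints above show that the kubelet on ha-744000-m04 stopped posting status (last heartbeat 17:21:37). When triaging a run like this, nodes in that state can be surfaced with a single kubectl query; a minimal sketch, assuming kubectl is pointed at this test cluster's kubeconfig:

	    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

	ha-744000-m04 would report Unknown here, matching the NodeNotReady event recorded by the node-controller 115s before this dump.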
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035497] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007988] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.713580] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007483] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.860704] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +1.323097] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.568053] systemd-fstab-generator[470]: Ignoring "noauto" option for root device
	[  +0.088340] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +1.270007] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.699814] systemd-fstab-generator[1089]: Ignoring "noauto" option for root device
	[  +0.246160] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.113601] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.118007] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +2.448707] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.103404] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.099552] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.136889] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.447009] systemd-fstab-generator[1564]: Ignoring "noauto" option for root device
	[  +6.924512] kauditd_printk_skb: 271 callbacks suppressed
	[ +22.054441] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 17:24] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [23a7e0d95a77] <==
	{"level":"info","ts":"2024-09-17T17:24:20.428661Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.428974Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.516533Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"557d957d9f2c237a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T17:24:20.516577Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:24:20.547191Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"557d957d9f2c237a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T17:24:20.549649Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:24:21.318628Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:24:21.318722Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"557d957d9f2c237a","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-17T17:26:25.950383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(4800379958354180231 13314548521573537860)"}
	{"level":"info","ts":"2024-09-17T17:26:25.951072Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"557d957d9f2c237a","removed-remote-peer-urls":["https://192.169.0.7:2380"]}
	{"level":"info","ts":"2024-09-17T17:26:25.951132Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:26:25.951663Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:26:25.951734Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:26:25.952208Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:26:25.952258Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:26:25.952469Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:26:25.952674Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","error":"context canceled"}
	{"level":"warn","ts":"2024-09-17T17:26:25.952718Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"557d957d9f2c237a","error":"failed to read 557d957d9f2c237a on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-17T17:26:25.952773Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:26:25.952943Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-09-17T17:26:25.953014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:26:25.953073Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:26:25.953095Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:26:25.963059Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"557d957d9f2c237a"}
	{"level":"warn","ts":"2024-09-17T17:26:25.965900Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.169.0.7:45860","server-name":"","error":"EOF"}
	
	
	==> etcd [8d4b19b4762b] <==
	2024/09/17 17:22:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:22:56.451985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.563767123s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:22:56.451994Z","caller":"traceutil/trace.go:171","msg":"trace[313034458] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; }","duration":"5.563777996s","start":"2024-09-17T17:22:50.888214Z","end":"2024-09-17T17:22:56.451992Z","steps":["trace[313034458] 'agreement among raft nodes before linearized reading'  (duration: 5.563767198s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:22:56.452003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:22:50.888192Z","time spent":"5.563809109s","remote":"127.0.0.1:56830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true "}
	2024/09/17 17:22:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:22:56.480665Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:22:56.480691Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:22:56.480721Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:22:56.480823Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.480834Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.480849Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484198Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484258Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484324Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484373Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:22:56.484412Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.484422Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.484436Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485143Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485195Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485239Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.485269Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"557d957d9f2c237a"}
	{"level":"info","ts":"2024-09-17T17:22:56.489683Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:22:56.489807Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:22:56.489816Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-744000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:26:31 up 3 min,  0 users,  load average: 0.55, 0.38, 0.15
	Linux ha-744000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f76145e8eaf] <==
	I0917 17:26:01.508317       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:01.508523       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:01.508659       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:11.511261       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:11.511335       1 main.go:299] handling current node
	I0917 17:26:11.511353       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:11.511367       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:11.512152       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:11.512248       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:11.512772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:11.512871       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504250       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:21.504302       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504625       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:21.504682       1 main.go:299] handling current node
	I0917 17:26:21.504706       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:21.504715       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:21.504816       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:21.504869       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:31.506309       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:31.506431       1 main.go:299] handling current node
	I0917 17:26:31.506449       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:31.506462       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:31.506621       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:31.506656       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [c585358c1649] <==
	I0917 17:22:25.536515       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:35.538913       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:22:35.539072       1 main.go:299] handling current node
	I0917 17:22:35.539097       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:22:35.539156       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:35.539326       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:22:35.539442       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:22:35.539599       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:22:35.539711       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:22:45.538117       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:22:45.538187       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:45.538682       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:22:45.538745       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:22:45.538817       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:22:45.538826       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:22:45.539211       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:22:45.539277       1 main.go:299] handling current node
	I0917 17:22:55.537919       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:22:55.537958       1 main.go:299] handling current node
	I0917 17:22:55.538082       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:22:55.538164       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:22:55.541016       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:22:55.541068       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:22:55.541176       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:22:55.541204       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [0468a8663a15] <==
	W0917 17:22:56.472516       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472542       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472566       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472591       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472619       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472645       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472671       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472697       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472722       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472761       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 17:22:56.472797       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 17:22:56.472868       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00d2f0210)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0917 17:22:56.472958       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.478008       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0917 17:22:56.478339       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.478396       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.478410       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0917 17:22:56.478623       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 17:22:56.478774       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00d2f0220)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0917 17:22:56.478935       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0917 17:22:56.479083       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 17:22:56.479195       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.479244       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:22:56.480492       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	I0917 17:22:56.494593       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [2e26c6d8d6f0] <==
	I0917 17:23:52.297690       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0917 17:23:52.297698       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0917 17:23:52.442975       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:23:52.450426       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:23:52.450605       1 policy_source.go:224] refreshing policies
	I0917 17:23:52.475151       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:23:52.476021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:23:52.476815       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:23:52.476953       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:23:52.477453       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:23:52.477542       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:23:52.479434       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:23:52.483086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:23:52.483434       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:23:52.483528       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:23:52.483600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:23:52.483707       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:23:52.484124       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0917 17:23:52.486549       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0917 17:23:52.488389       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:23:52.492209       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:23:52.498932       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 17:23:52.503018       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 17:23:53.290215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 17:23:53.614881       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [12b3b4eba9d4] <==
	I0917 17:24:59.517216       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xhksf\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 17:24:59.517627       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e8e8504b-8b6f-4ef7-808e-297a73c11a8b", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xhksf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xhksf": the object has been modified; please apply your changes to the latest version and try again
	I0917 17:24:59.531517       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.700362ms"
	I0917 17:24:59.531770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.529µs"
	I0917 17:24:59.570271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.057272ms"
	I0917 17:24:59.570630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.874µs"
	I0917 17:26:22.594212       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m03"
	I0917 17:26:22.602288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m03"
	I0917 17:26:22.651433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.671609ms"
	I0917 17:26:22.694137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.649765ms"
	I0917 17:26:22.711793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.608123ms"
	I0917 17:26:22.732915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.965782ms"
	I0917 17:26:22.733011       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.974µs"
	I0917 17:26:22.744528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.156787ms"
	I0917 17:26:22.744755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="169.556µs"
	I0917 17:26:24.767731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.968µs"
	I0917 17:26:25.705907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.114µs"
	I0917 17:26:25.710056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.786µs"
	I0917 17:26:26.705915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-744000-m03"
	E0917 17:26:26.778391       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-744000-m03\", UID:\"220e574e-2eed-4ca1-b50a-77572813c612\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-744000-m03\", UID:\"9e5bd535-2a6d-49e3-98da-573a99d18a8b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-744000-m03\" not found" logger="UnhandledError"
	E0917 17:26:31.563915       1 gc_controller.go:151] "Failed to get node" err="node \"ha-744000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-744000-m03"
	E0917 17:26:31.563997       1 gc_controller.go:151] "Failed to get node" err="node \"ha-744000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-744000-m03"
	E0917 17:26:31.564010       1 gc_controller.go:151] "Failed to get node" err="node \"ha-744000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-744000-m03"
	E0917 17:26:31.564018       1 gc_controller.go:151] "Failed to get node" err="node \"ha-744000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-744000-m03"
	E0917 17:26:31.564026       1 gc_controller.go:151] "Failed to get node" err="node \"ha-744000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-744000-m03"
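
	The repeated "node \"ha-744000-m03\" not found" errors from the pod garbage collector are consistent with the etcd member removal logged earlier: the Node object was deleted while its lease and pods were still being cleaned up, so lookups race with the deletion. That the node is gone can be checked directly (same kubeconfig assumption as above):

	    kubectl get node ha-744000-m03

	which would return "Error from server (NotFound): nodes \"ha-744000-m03\" not found".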
	
	
	==> kube-controller-manager [e2a0b2a78de1] <==
	I0917 17:23:32.303312       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:23:32.693827       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:23:32.693863       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:23:32.695653       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:23:32.695777       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:23:32.696039       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:23:32.696052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 17:23:52.700852       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [8b4d53aa2a21] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:18:11.351196       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:18:11.358110       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:18:11.358182       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:18:11.422753       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:18:11.422782       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:18:11.422800       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:18:11.425522       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:18:11.425930       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:18:11.426022       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:18:11.427003       1 config.go:199] "Starting service config controller"
	I0917 17:18:11.427067       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:18:11.427147       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:18:11.427190       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:18:11.428338       1 config.go:328] "Starting node config controller"
	I0917 17:18:11.428397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:18:11.529109       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:18:11.529170       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:18:11.529199       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fb8b83fe49a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:24:21.123827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:24:21.146583       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:24:21.146876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:24:21.179243       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:24:21.179464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:24:21.179596       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:24:21.183190       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:24:21.184459       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:24:21.184543       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:24:21.188244       1 config.go:199] "Starting service config controller"
	I0917 17:24:21.188350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:24:21.188588       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:24:21.188659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:24:21.192108       1 config.go:328] "Starting node config controller"
	I0917 17:24:21.192216       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:24:21.289888       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:24:21.289903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:24:21.293411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7645ef2ae8d] <==
	W0917 17:23:52.361884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:23:52.361916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.361961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:23:52.361995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:23:52.362165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:23:52.362240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:23:52.362314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:23:52.362490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:23:52.362567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:23:52.362640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:23:52.362799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 17:23:53.372962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
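Note: the burst of "forbidden" list/watch errors above is the scheduler's informers starting before the apiserver has finished serving the default RBAC bindings for system:kube-scheduler; the cache-sync line that follows suggests the window closed on its own. A minimal after-the-fact check, assuming the ha-744000 context is still reachable:

        kubectl --context ha-744000 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler
        kubectl --context ha-744000 auth can-i watch nodes --as=system:kube-scheduler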
	
	
	==> kube-scheduler [b88f9e96fc4a] <==
	E0917 17:20:36.496971       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c9b889c7-d588-4f6b-b31b-3c8f1e40d87a(default/busybox-7dff88458-qcq64) was assumed on ha-744000-m02 but assigned to ha-744000-m03" pod="default/busybox-7dff88458-qcq64"
	E0917 17:20:36.497061       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qcq64\": pod busybox-7dff88458-qcq64 is already assigned to node \"ha-744000-m03\"" pod="default/busybox-7dff88458-qcq64"
	I0917 17:20:36.497317       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qcq64" node="ha-744000-m03"
	I0917 17:20:36.501897       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="1a86846a-5461-4020-b90c-f3dd17823fa1" pod="default/busybox-7dff88458-zg4mr" assumedNode="ha-744000" currentNode="ha-744000-m03"
	E0917 17:20:36.509943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zg4mr\": pod busybox-7dff88458-zg4mr is already assigned to node \"ha-744000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-zg4mr" node="ha-744000-m03"
	E0917 17:20:36.513236       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1a86846a-5461-4020-b90c-f3dd17823fa1(default/busybox-7dff88458-zg4mr) was assumed on ha-744000-m03 but assigned to ha-744000" pod="default/busybox-7dff88458-zg4mr"
	E0917 17:20:36.513426       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zg4mr\": pod busybox-7dff88458-zg4mr is already assigned to node \"ha-744000\"" pod="default/busybox-7dff88458-zg4mr"
	I0917 17:20:36.513506       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-zg4mr" node="ha-744000"
	E0917 17:21:07.475850       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-66bkb\": pod kube-proxy-66bkb is already assigned to node \"ha-744000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-66bkb" node="ha-744000-m04"
	E0917 17:21:07.476183       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wqkz7\": pod kindnet-wqkz7 is already assigned to node \"ha-744000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wqkz7" node="ha-744000-m04"
	E0917 17:21:07.477315       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7821858b-abb3-4eb3-9046-f58a13f48267(kube-system/kube-proxy-66bkb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-66bkb"
	E0917 17:21:07.477361       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-66bkb\": pod kube-proxy-66bkb is already assigned to node \"ha-744000-m04\"" pod="kube-system/kube-proxy-66bkb"
	I0917 17:21:07.477405       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-66bkb" node="ha-744000-m04"
	E0917 17:21:07.481780       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7e9ecf5e-795d-401b-91e5-7b713e07415f(kube-system/kindnet-wqkz7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wqkz7"
	E0917 17:21:07.481854       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wqkz7\": pod kindnet-wqkz7 is already assigned to node \"ha-744000-m04\"" pod="kube-system/kindnet-wqkz7"
	I0917 17:21:07.481873       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wqkz7" node="ha-744000-m04"
	E0917 17:21:07.500320       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-njxt8\": pod kindnet-njxt8 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-njxt8" node="ha-744000-m04"
	E0917 17:21:07.500421       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-njxt8\": pod kindnet-njxt8 is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-njxt8"
	E0917 17:21:07.500768       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s4wh8\": pod kube-proxy-s4wh8 is already assigned to node \"ha-744000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s4wh8" node="ha-744000-m04"
	E0917 17:21:07.501164       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s4wh8\": pod kube-proxy-s4wh8 is already assigned to node \"ha-744000-m04\"" pod="kube-system/kube-proxy-s4wh8"
	I0917 17:21:07.501336       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s4wh8" node="ha-744000-m04"
	I0917 17:22:56.486998       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 17:22:56.488377       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 17:22:56.488567       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 17:22:56.501920       1 run.go:72] "command failed" err="finished without leader elect"
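Note: the "already assigned to node" bind failures are the usual signature of two scheduler instances racing in an HA control plane: each bind is rejected once the pod is already bound elsewhere, and the scheduler correctly aborts the requeue. The closing "finished without leader elect" line at 17:22:56 appears to be the process exiting after it stopped serving, not a separate fault. Hypothetical follow-up commands to confirm where the pods landed and who holds the scheduler lease:

        kubectl --context ha-744000 get pods -A -o wide
        kubectl --context ha-744000 -n kube-system get lease kube-scheduler -o yaml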
	
	
	==> kubelet <==
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.703044    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1782d63db94f350b5edabaff3845d7885d001cd575956e68ea4ab801acefc5b"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.712344    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b4b5191649e7e23e89a07879b4f0adaac0597f0bf423d115837c82fc418492c"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.803666    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0eee6e67fe42b4371fb56c6ecb297d3e69a4cba74a5270a6664b8feaeae27e3"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.817656    1571 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-744000" podUID="4613d53e-c3b7-48eb-bb87-057beab671e7"
	Sep 17 17:24:20 ha-744000 kubelet[1571]: I0917 17:24:20.818111    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3888ce04e78dbb34e516e447734d2814db5be0d6808e1f32db4bbbdf86597bc4"
	Sep 17 17:24:24 ha-744000 kubelet[1571]: I0917 17:24:24.111673    1571 scope.go:117] "RemoveContainer" containerID="c938e7f2f1d48167ceab0b28c2510958f5ed8c527865274d730fb6a34c68d6fc"
	Sep 17 17:24:24 ha-744000 kubelet[1571]: E0917 17:24:24.159210    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:24:24 ha-744000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:24:24 ha-744000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:24:24 ha-744000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:24:24 ha-744000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
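Note: the canary failure means the guest kernel has no ip6tables nat table (the ip6table_nat module is not loaded in the minikube guest), and the kubelet retries every minute, which is why the identical block repeats below. A quick check from the host, assuming the binary and profile used in this run:

        out/minikube-darwin-amd64 ssh -p ha-744000 -- "lsmod | grep ip6table; sudo ip6tables -t nat -L -n"

This is noisy but generally harmless on an IPv4-only single-stack cluster.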
	Sep 17 17:24:51 ha-744000 kubelet[1571]: I0917 17:24:51.193772    1571 scope.go:117] "RemoveContainer" containerID="7614d753e30b082bbb245659759587cc678073082201f9c648429b0e86eb7f3d"
	Sep 17 17:24:51 ha-744000 kubelet[1571]: I0917 17:24:51.193990    1571 scope.go:117] "RemoveContainer" containerID="8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2"
	Sep 17 17:24:51 ha-744000 kubelet[1571]: E0917 17:24:51.194071    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9c968c58-13fc-40ef-8098-3b66787272db)\"" pod="kube-system/storage-provisioner" podUID="9c968c58-13fc-40ef-8098-3b66787272db"
	Sep 17 17:25:06 ha-744000 kubelet[1571]: I0917 17:25:06.126176    1571 scope.go:117] "RemoveContainer" containerID="8fea3c0c8d014333c2e1d75d07273a12aeefb3fc38eb637e77ea4dd7f09a23d2"
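Note: storage-provisioner lands in a short (10s) CrashLoopBackOff here, which commonly happens right after a control-plane restart while the apiserver endpoint settles; the RemoveContainer at 17:25:06 is the kubelet retrying it. To read the crashed container's output, assuming the pod name shown in the log:

        kubectl --context ha-744000 -n kube-system logs storage-provisioner --previous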
	Sep 17 17:25:24 ha-744000 kubelet[1571]: E0917 17:25:24.147461    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:25:24 ha-744000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:25:24 ha-744000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:25:24 ha-744000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:25:24 ha-744000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:26:24 ha-744000 kubelet[1571]: E0917 17:26:24.146122    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:26:24 ha-744000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:26:24 ha-744000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:26:24 ha-744000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:26:24 ha-744000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-744000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-54lbj
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-744000 describe pod busybox-7dff88458-54lbj
helpers_test.go:282: (dbg) kubectl --context ha-744000 describe pod busybox-7dff88458-54lbj:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-54lbj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khh7p (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-khh7p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  8s (x2 over 10s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
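Note, reading the event message: of the four nodes, one is unreachable (untolerated taint), one is cordoned (unschedulable), and the two remaining nodes already violate the busybox pods' anti-affinity rules, so no node qualifies and preemption cannot create room. To see which nodes carry the taint or the cordon, assuming the test context:

        kubectl --context ha-744000 get nodes
        kubectl --context ha-744000 describe nodes | grep -i -A1 taint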

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (11.14s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (97.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-744000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-744000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m34.782015981s)

                                                
                                                
-- stdout --
	* [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	* Restarting existing hyperkit VM for "ha-744000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	* Enabled addons: 
	
	* Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	* Restarting existing hyperkit VM for "ha-744000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:26:58.457695    4448 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:58.457869    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.457875    4448 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:58.457878    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.458048    4448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:58.459431    4448 out.go:352] Setting JSON to false
	I0917 10:26:58.481798    4448 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3385,"bootTime":1726590633,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:26:58.481949    4448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:26:58.503960    4448 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:26:58.546841    4448 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:26:58.546875    4448 notify.go:220] Checking for updates...
	I0917 10:26:58.589550    4448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:26:58.610683    4448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:26:58.631667    4448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:26:58.652583    4448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:26:58.673667    4448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:26:58.695561    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:58.696255    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.696327    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.705884    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52142
	I0917 10:26:58.706304    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.706746    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.706764    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.707014    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.707146    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.707350    4448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:26:58.707601    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.707628    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.716185    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 10:26:58.716537    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.716881    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.716897    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.717100    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.717222    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.745596    4448 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:26:58.787571    4448 start.go:297] selected driver: hyperkit
	I0917 10:26:58.787600    4448 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.787838    4448 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:26:58.788024    4448 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.788251    4448 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:26:58.797793    4448 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:26:58.801784    4448 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.801808    4448 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:26:58.804449    4448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:26:58.804489    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:26:58.804523    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:26:58.804589    4448 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.804704    4448 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.826385    4448 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:26:58.847617    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:26:58.847686    4448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:26:58.847716    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:26:58.847928    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:26:58.847948    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:26:58.848103    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:58.849030    4448 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:26:58.849203    4448 start.go:364] duration metric: took 147.892µs to acquireMachinesLock for "ha-744000"
	I0917 10:26:58.849244    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:26:58.849261    4448 fix.go:54] fixHost starting: 
	I0917 10:26:58.849685    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.849713    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.858847    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52146
	I0917 10:26:58.859214    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.859547    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.859558    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.859809    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.859941    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.860044    4448 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:58.860131    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.860222    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:58.861252    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.861281    4448 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:26:58.861296    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:26:58.861379    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:26:58.903396    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:26:58.924477    4448 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:26:58.924739    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.924805    4448 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:26:58.926818    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.926830    4448 main.go:141] libmachine: (ha-744000) DBG | pid 4331 is in state "Stopped"
	I0917 10:26:58.926844    4448 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
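Note: the driver concludes the VM is stopped because the pid recorded in hyperkit.pid (4331) no longer exists, then clears the stale pid file before booting a fresh hyperkit process. The equivalent manual check, using the path from this log:

        cat /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
        ps -p 4331    # exits non-zero once the process is gone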
	I0917 10:26:58.927183    4448 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:26:59.037116    4448 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:26:59.037141    4448 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:26:59.037239    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037264    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037302    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:26:59.037345    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:26:59.037367    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:26:59.039007    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Pid is 4462
	I0917 10:26:59.039387    4448 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:26:59.039405    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:59.039460    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4462
	I0917 10:26:59.040899    4448 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:26:59.040968    4448 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:26:59.040982    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:26:59.040991    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:26:59.041010    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:26:59.041033    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:26:59.041040    4448 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:26:59.041046    4448 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
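Note: with no guest agent, the hyperkit driver discovers the VM's address by scanning the host's DHCP lease database for the MAC it generated (36:e3:93:ff:24:96 above), which resolves to 192.169.0.5. The same lookup by hand on the macOS host:

        grep -i -B1 -A3 '36:e3:93:ff:24:96' /var/db/dhcpd_leases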
	I0917 10:26:59.041079    4448 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:26:59.041673    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:26:59.041837    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:59.042200    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:26:59.042209    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:59.042313    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:26:59.042393    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:26:59.042497    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042594    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:26:59.042817    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:26:59.043033    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:26:59.043044    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:26:59.047101    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:26:59.098991    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:26:59.099689    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.099714    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.099723    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.099730    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.478495    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:26:59.478510    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:26:59.593167    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.593183    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.593195    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.593203    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.594075    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:26:59.594086    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:05.183473    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:05.183540    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:05.183555    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:05.208169    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:10.113996    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:10.114014    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114152    4448 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:27:10.114163    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114266    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.114402    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.114494    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114584    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.114812    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.114997    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.115005    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:27:10.189969    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:27:10.189985    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.190121    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.190233    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190324    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190425    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.190562    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.190707    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.190718    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:10.253511    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
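Note: the /etc/hosts snippet above is idempotent: it only edits the file when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry in place or appending one otherwise. The empty command output is consistent with one of the silent branches having run; a hypothetical spot check:

        out/minikube-darwin-amd64 ssh -p ha-744000 -- "grep 127.0.1.1 /etc/hosts"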
	I0917 10:27:10.253531    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:10.253549    4448 buildroot.go:174] setting up certificates
	I0917 10:27:10.253555    4448 provision.go:84] configureAuth start
	I0917 10:27:10.253563    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.253694    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:10.253790    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.253930    4448 provision.go:143] copyHostCerts
	I0917 10:27:10.253971    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254039    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:10.254046    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254180    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:10.254370    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254409    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:10.254414    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254534    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:10.254684    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254722    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:10.254727    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254807    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:10.254980    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
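Note: provision.go mints a per-machine server certificate whose SANs cover every name the Docker TLS endpoint may be reached by (127.0.0.1, the VM IP, the machine name, localhost, minikube). To inspect the generated SANs, assuming the machines path from this run:

        openssl x509 -noout -text \
          -in /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'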
	I0917 10:27:10.443647    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:10.443709    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:10.443745    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.444017    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.444217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.444311    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.444408    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:10.481724    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:10.481797    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:10.501694    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:10.501755    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:27:10.521451    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:10.521514    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:27:10.541883    4448 provision.go:87] duration metric: took 288.31459ms to configureAuth
	I0917 10:27:10.541895    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:10.542067    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:10.542085    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:10.542217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.542312    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.542387    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542467    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542559    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.542679    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.542806    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.542813    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:10.601508    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:10.601520    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:10.601615    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:10.601630    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.601764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.601865    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.601953    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.602043    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.602200    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.602343    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.602386    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:10.669944    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
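The drop-in above resets the inherited ExecStart before supplying its own, exactly as its comments explain. A minimal Go sketch of rendering such a drop-in with text/template; the field names and flag set here are illustrative assumptions, not the template minikube actually ships:

```go
// Render a docker.service drop-in that clears the inherited ExecStart.
// Illustrative only: opts fields and flags are assumptions.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
# A second ExecStart= is only valid for Type=oneshot units, so the
# command inherited from the base unit must be cleared first.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --label provider={{.Provider}}
`

type opts struct {
	Port     int
	Provider string
}

func main() {
	t := template.Must(template.New("docker").Parse(dropIn))
	if err := t.Execute(os.Stdout, opts{Port: 2376, Provider: "hyperkit"}); err != nil {
		panic(err)
	}
}
```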
	
	I0917 10:27:10.669969    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.670102    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.670200    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670294    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670389    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.670510    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.670646    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.670658    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:12.369424    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:12.369438    4448 machine.go:96] duration metric: took 13.32714724s to provisionDockerMachine
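The command above installs the new unit only when it differs from what is already on disk: diff exits non-zero on any difference (or when the target is missing, as in this run), which short-circuits into the move-and-restart branch. A sketch of the same pattern run locally, assuming bash is available; minikube itself drives the equivalent command over SSH via ssh_runner:

```go
// Idempotent "replace only if changed" unit update, mirroring the shell
// pipeline in the log above. Sketch for local execution only.
package main

import (
	"fmt"
	"os/exec"
)

func updateDockerUnit(current, staged string) error {
	// diff returns non-zero when the files differ or current is missing,
	// so the braced branch runs only when an update is actually needed.
	script := fmt.Sprintf(
		"sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }",
		current, staged)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := updateDockerUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println("update failed:", err)
	}
}
```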
	I0917 10:27:12.369451    4448 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:27:12.369463    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:12.369473    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.369675    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:12.369692    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.369803    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.369884    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.369975    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.370067    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.413317    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:12.417238    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:12.417272    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:12.417380    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:12.417569    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:12.417576    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:12.417788    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:12.427707    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:12.461431    4448 start.go:296] duration metric: took 91.970306ms for postStartSetup
	I0917 10:27:12.461460    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.461662    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:12.461675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.461764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.461863    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.461951    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.462049    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.498975    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:12.499039    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:12.553785    4448 fix.go:56] duration metric: took 13.704442272s for fixHost
	I0917 10:27:12.553808    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.553948    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.554064    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554158    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554243    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.554376    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:12.554528    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:12.554535    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:12.611703    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594032.650749132
	
	I0917 10:27:12.611715    4448 fix.go:216] guest clock: 1726594032.650749132
	I0917 10:27:12.611721    4448 fix.go:229] Guest: 2024-09-17 10:27:12.650749132 -0700 PDT Remote: 2024-09-17 10:27:12.553798 -0700 PDT m=+14.131667372 (delta=96.951132ms)
	I0917 10:27:12.611739    4448 fix.go:200] guest clock delta is within tolerance: 96.951132ms
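fix.go compares the guest's `date +%s.%N` output against host time and accepts the drift when it falls within tolerance. A sketch of that comparison; the 2s tolerance here is an assumption, not minikube's actual constant:

```go
// Guest/host clock comparison sketched from the fix.go lines above.
package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta // compare magnitude; guest may lag or lead
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(96951132 * time.Nanosecond) // the 96.951132ms delta logged above
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
```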
	I0917 10:27:12.611750    4448 start.go:83] releasing machines lock for "ha-744000", held for 13.76244446s
	I0917 10:27:12.611768    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.611894    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:12.611995    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612340    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612438    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612522    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:12.612557    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612569    4448 ssh_runner.go:195] Run: cat /version.json
	I0917 10:27:12.612585    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612694    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612758    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612775    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612845    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612893    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612945    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.612977    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.648784    4448 ssh_runner.go:195] Run: systemctl --version
	I0917 10:27:12.693591    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:27:12.698718    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:12.698762    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:12.712125    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:12.712136    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.712235    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:12.730012    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:12.739057    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:12.747889    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:12.747935    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:12.757003    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.765797    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:12.774517    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.783400    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:12.792355    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:12.801214    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:12.810043    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:12.818991    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:12.826988    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:12.835075    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:12.932332    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:12.951203    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.951306    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:12.965837    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:12.981143    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:12.997816    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:13.008834    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.019726    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:13.047621    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.057914    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:13.072731    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:13.075778    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:13.083057    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:13.096420    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:13.190446    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:13.291417    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:13.291479    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:13.305208    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:13.405566    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:27:15.763788    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.358187677s)
	I0917 10:27:15.763854    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:27:15.774266    4448 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:27:15.786987    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:15.797461    4448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:27:15.892958    4448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:27:15.992563    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.099704    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:27:16.113167    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:16.123851    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.230595    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:27:16.294806    4448 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:27:16.294898    4448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
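start.go budgets 60s for /var/run/cri-dockerd.sock to appear before proceeding. A minimal sketch of such a wait loop, assuming a 500ms poll interval:

```go
// Wait up to a timeout for a socket path to exist, as logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket (or file) exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
```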
	I0917 10:27:16.300863    4448 start.go:563] Will wait 60s for crictl version
	I0917 10:27:16.300922    4448 ssh_runner.go:195] Run: which crictl
	I0917 10:27:16.304010    4448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:27:16.329606    4448 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:27:16.329710    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.346052    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.386748    4448 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:27:16.386784    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:16.387136    4448 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:27:16.390752    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.401571    4448 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:27:16.401664    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:16.401736    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.415872    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.415884    4448 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:27:16.415970    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.427730    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.427747    4448 cache_images.go:84] Images are preloaded, skipping loading
	I0917 10:27:16.427754    4448 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:27:16.427829    4448 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:27:16.427915    4448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:27:16.463597    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:27:16.463611    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:27:16.463624    4448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:27:16.463640    4448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:27:16.463730    4448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 10:27:16.463744    4448 kube-vip.go:115] generating kube-vip config ...
	I0917 10:27:16.463801    4448 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:27:16.478021    4448 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:27:16.478094    4448 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 10:27:16.478153    4448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:27:16.486558    4448 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:27:16.486616    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:27:16.494493    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:27:16.507997    4448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:27:16.521295    4448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:27:16.535199    4448 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:27:16.548668    4448 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:27:16.551530    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
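Both /etc/hosts edits above use the same pattern: filter out any stale line for the name, append a fresh tab-separated entry, and copy the result into place. The same logic in Go, writing through a temp file as the shell's /tmp/h.$$ does; pinHost is an illustrative helper, not minikube's code:

```go
// Drop any stale pin for the name, append "IP<TAB>name", swap the file in.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Matches the shell's `grep -v $'\tname$'`: drop old pins only.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".new" // stand-in for the shell's /tmp/h.$$
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	fmt.Println(pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"))
}
```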
	I0917 10:27:16.561441    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.669349    4448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:27:16.684528    4448 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:27:16.684541    4448 certs.go:194] generating shared ca certs ...
	I0917 10:27:16.684551    4448 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.684731    4448 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:27:16.684804    4448 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:27:16.684814    4448 certs.go:256] generating profile certs ...
	I0917 10:27:16.684905    4448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:27:16.684929    4448 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437
	I0917 10:27:16.684945    4448 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0917 10:27:16.754039    4448 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 ...
	I0917 10:27:16.754056    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437: {Name:mk79438fdb4dc3d525e8f682359147c957173c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754456    4448 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 ...
	I0917 10:27:16.754466    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437: {Name:mk6d911cd96357b3c3159c4d3a41f23afb7d4c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754680    4448 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:27:16.754895    4448 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:27:16.755149    4448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:27:16.755158    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:27:16.755205    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:27:16.755227    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:27:16.755246    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:27:16.755264    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:27:16.755283    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:27:16.755301    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:27:16.755318    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:27:16.755412    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:27:16.755459    4448 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:27:16.755467    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:27:16.755497    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:27:16.755530    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:27:16.755558    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:27:16.755623    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:16.755655    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:16.755675    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:27:16.755693    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:27:16.756123    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:27:16.777874    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:27:16.799280    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:27:16.827224    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:27:16.853838    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:27:16.907328    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:27:16.953101    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:27:16.997682    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:27:17.038330    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:27:17.061602    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:27:17.092949    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:27:17.123494    4448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:27:17.140334    4448 ssh_runner.go:195] Run: openssl version
	I0917 10:27:17.145978    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:27:17.156986    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161699    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161756    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.170341    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:27:17.187142    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:27:17.201375    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204789    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204832    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.208961    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:27:17.218128    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:27:17.227213    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230513    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230553    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.234703    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 10:27:17.243926    4448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:27:17.247354    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:27:17.251674    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:27:17.256090    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:27:17.260499    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:27:17.264702    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:27:17.268923    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
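Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509; the helper name is illustrative and the path is taken from the log:

```go
// Pure-Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+window, i.e. renewal is due.
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}
```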
	I0917 10:27:17.273119    4448 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:27:17.273252    4448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:27:17.284758    4448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:27:17.293284    4448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:27:17.293296    4448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:27:17.293343    4448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:27:17.301434    4448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:27:17.301756    4448 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.301839    4448 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:27:17.302016    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.302656    4448 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.302866    4448 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x4ad2720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:27:17.303186    4448 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:27:17.303370    4448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:27:17.311395    4448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:27:17.311410    4448 kubeadm.go:597] duration metric: took 18.109722ms to restartPrimaryControlPlane
	I0917 10:27:17.311416    4448 kubeadm.go:394] duration metric: took 38.30313ms to StartCluster
	I0917 10:27:17.311425    4448 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.311502    4448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.311847    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.312074    4448 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:27:17.312086    4448 start.go:241] waiting for startup goroutines ...
	I0917 10:27:17.312098    4448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:27:17.312209    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.356558    4448 out.go:177] * Enabled addons: 
	I0917 10:27:17.377453    4448 addons.go:510] duration metric: took 65.359314ms for enable addons: enabled=[]
	I0917 10:27:17.377491    4448 start.go:246] waiting for cluster config update ...
	I0917 10:27:17.377508    4448 start.go:255] writing updated cluster config ...
	I0917 10:27:17.399517    4448 out.go:201] 
	I0917 10:27:17.421006    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.421153    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.443394    4448 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:27:17.485722    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:17.485786    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:27:17.485968    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:27:17.485986    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:27:17.486112    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.487099    4448 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:27:17.487205    4448 start.go:364] duration metric: took 81.172µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:27:17.487235    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:27:17.487243    4448 fix.go:54] fixHost starting: m02
	I0917 10:27:17.487683    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:27:17.487720    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:27:17.497503    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52168
	I0917 10:27:17.498037    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:27:17.498462    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:27:17.498477    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:27:17.498776    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:27:17.499011    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.499112    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:27:17.499198    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.499265    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:27:17.500274    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.500290    4448 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:27:17.500304    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:27:17.500387    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:27:17.542418    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:27:17.563504    4448 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:27:17.563707    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.563730    4448 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:27:17.564875    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.564887    4448 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4339 is in state "Stopped"
	I0917 10:27:17.564903    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
	I0917 10:27:17.565097    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:27:17.591233    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:27:17.591269    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:27:17.591443    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591484    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591541    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machine
s/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:27:17.591573    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
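For readers less familiar with hyperkit, the invocation logged above decodes as follows. This is an annotated sketch based on hyperkit's documented flags; paths are elided here since they appear in full in the log line above:

	# /usr/local/bin/hyperkit ...
	#   -A                 create an ACPI table for the guest
	#   -u                 RTC keeps UTC time
	#   -F <pidfile>       write the hyperkit pid (4469 below) to this file
	#   -c 2 -m 2200M      2 vCPUs, 2200 MiB of RAM
	#   -s 0:0,hostbridge  PCI host bridge;  -s 31,lpc  LPC bus for the serial port
	#   -s 1:0,virtio-net  vmnet NIC (hence the dhcpd lease scan that follows)
	#   -U <uuid>          VM UUID, from which vmnet derives a stable MAC address
	#   -s 2:0,virtio-blk,<rawdisk>   root disk;  -s 3,ahci-cd,<iso>  boot ISO
	#   -s 4,virtio-rnd    entropy device
	#   -l com1,autopty=<tty>,log=<ring>  serial console on an auto-allocated pty
	#   -f kexec,<bzimage>,<initrd>,"<kernel cmdline>"  direct kernel boot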
	I0917 10:27:17.591591    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:27:17.592872    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Pid is 4469
	I0917 10:27:17.593367    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:27:17.593378    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.593408    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4469
	I0917 10:27:17.595062    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:27:17.595127    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:27:17.595146    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0d6c}
	I0917 10:27:17.595182    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:27:17.595200    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:27:17.595210    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:27:17.595213    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:27:17.595230    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:27:17.595241    4448 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
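Since there is no guest agent at this point, the driver learns the VM's IP by scanning the macOS vmnet DHCP lease database for the MAC it expects. An equivalent manual check from the host would look roughly like this (the lease-file field names are an assumption about the usual /var/db/dhcpd_leases format; note that vmnet drops leading zeros per octet, so 72:92:06:... is stored as 72:92:6:...):

	# Print the lease entry for the VM's MAC address (field name assumed):
	grep -B 2 'hw_address=1,72:92:6:7e:7d:92' /var/db/dhcpd_leases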
	I0917 10:27:17.595879    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:17.596065    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.596597    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:27:17.596609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.596723    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:17.596804    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:17.596890    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597096    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:17.597227    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:17.597374    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:17.597383    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:27:17.600658    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:27:17.609248    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:27:17.610115    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:17.610129    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:17.610159    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:17.610179    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:17.995972    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:27:17.995987    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:27:18.110623    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:18.110642    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:18.110651    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:18.110657    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:18.111459    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:27:18.111468    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:23.703289    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:23.703415    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:23.703428    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:23.727083    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:28.668165    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:28.668207    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668348    4448 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:27:28.668359    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668445    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.668533    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.668618    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.668945    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.669097    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.669106    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:27:28.749259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:27:28.749274    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.749405    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.749513    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749700    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.749847    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.749994    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.750009    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:28.821499    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
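The embedded shell script above is idempotent: it only touches /etc/hosts when no line already maps the hostname, preferring to rewrite an existing 127.0.1.1 entry over appending a new one. A quick after-the-fact check (illustrative):

	# Verify the node can resolve its own hostname locally:
	grep 'ha-744000-m02' /etc/hosts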
	I0917 10:27:28.821514    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:28.821523    4448 buildroot.go:174] setting up certificates
	I0917 10:27:28.821528    4448 provision.go:84] configureAuth start
	I0917 10:27:28.821534    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.821669    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:28.821789    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.821885    4448 provision.go:143] copyHostCerts
	I0917 10:27:28.821910    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.821968    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:28.821973    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.822114    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:28.822315    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822354    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:28.822366    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822450    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:28.822596    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822635    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:28.822639    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822717    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:28.822857    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
	I0917 10:27:28.955024    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:28.955079    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:28.955094    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.955239    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.955341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.955430    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.955526    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:28.994909    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:28.994978    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:27:29.014096    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:29.014170    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:27:29.033197    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:29.033261    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:29.052129    4448 provision.go:87] duration metric: took 230.592645ms to configureAuth
	I0917 10:27:29.052147    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:29.052322    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:29.052336    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:29.052473    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.052573    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.052670    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052755    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052827    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.052942    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.053069    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.053076    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:29.116259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:29.116272    4448 buildroot.go:70] root file system type: tmpfs
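The df probe above is how the provisioner decides how to handle the Docker unit: on this Buildroot guest the root filesystem is tmpfs, so anything written under /lib/systemd/system presumably lives in RAM and must be re-provisioned on each boot. The probe itself is a one-liner:

	# Same probe the provisioner runs; prints "tmpfs" on this guest:
	df --output=fstype / | tail -n 1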
	I0917 10:27:29.116365    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:29.116377    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.116506    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.116595    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116715    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116793    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.116936    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.117075    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.117118    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:29.192146    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:29.192170    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.192303    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.192391    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192497    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192577    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.192705    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.192844    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.192856    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:30.870717    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:30.870732    4448 machine.go:96] duration metric: took 13.274043119s to provisionDockerMachine
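Note the update pattern used here: the unit is written to docker.service.new and only swapped into place (followed by daemon-reload, enable, and restart) when diff exits non-zero, which covers both "contents differ" and, as in the output above, "old file missing". The same idempotent write-compare-swap shape, generalized (a sketch; app.conf and the generator are hypothetical):

	# Write a candidate config, replace the live one only when it changed:
	generate_config > /tmp/app.conf.new               # hypothetical generator
	sudo diff -u /etc/app.conf /tmp/app.conf.new || {
	    sudo mv /tmp/app.conf.new /etc/app.conf       # differs, or old file missing
	    sudo systemctl daemon-reload && sudo systemctl restart app
	}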
	I0917 10:27:30.870747    4448 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:27:30.870755    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:30.870766    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.870980    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:30.870994    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.871125    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.871248    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.871341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.871432    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.914708    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:30.918099    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:30.918113    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:30.918212    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:30.918387    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:30.918394    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:30.918605    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:30.929083    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:30.958117    4448 start.go:296] duration metric: took 87.359751ms for postStartSetup
	I0917 10:27:30.958138    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.958316    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:30.958328    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.958426    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.958518    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.958597    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.958669    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.998754    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:30.998827    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:31.054686    4448 fix.go:56] duration metric: took 13.567353836s for fixHost
	I0917 10:27:31.054713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.054850    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.054939    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055014    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055085    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.055233    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:31.055380    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:31.055386    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:31.119216    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594051.159133703
	
	I0917 10:27:31.119227    4448 fix.go:216] guest clock: 1726594051.159133703
	I0917 10:27:31.119235    4448 fix.go:229] Guest: 2024-09-17 10:27:31.159133703 -0700 PDT Remote: 2024-09-17 10:27:31.054702 -0700 PDT m=+32.632454337 (delta=104.431703ms)
	I0917 10:27:31.119246    4448 fix.go:200] guest clock delta is within tolerance: 104.431703ms
	I0917 10:27:31.119250    4448 start.go:83] releasing machines lock for "ha-744000-m02", held for 13.631947572s
	I0917 10:27:31.119267    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.119393    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:31.143966    4448 out.go:177] * Found network options:
	I0917 10:27:31.164924    4448 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:27:31.185989    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.186029    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.186884    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187158    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187319    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:31.187368    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:27:31.187382    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.187491    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:27:31.187550    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.187616    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187796    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.187813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187986    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.188154    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188197    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:31.188284    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:27:31.224656    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:31.224727    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:31.272646    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:31.272663    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.272743    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.288486    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:31.297401    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:31.306736    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.306808    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:31.316018    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.325058    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:31.334512    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.343837    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:31.353242    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:31.362032    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:31.371387    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:31.380261    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:31.388512    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:31.396778    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.496690    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:31.515568    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.515642    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:31.540737    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.552945    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:31.572641    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.584129    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.595235    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:31.619571    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.631020    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.646195    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:31.649235    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:31.657206    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:31.670819    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:31.769091    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:31.876805    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.876827    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:31.890932    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.985803    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:28:33.019399    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.033193508s)
	I0917 10:28:33.019489    4448 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:28:33.055431    4448 out.go:201] 
	W0917 10:28:33.077249    4448 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:27:29 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.538749787Z" level=info msg="Starting up"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.539378325Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.541084999Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.558457504Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573199339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573220908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573258162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573299725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573411020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573446242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573553666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573587921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573599847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573607195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573685739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573880273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575404717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575443775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575555494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575590640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575719071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575763589Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.577951289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578038703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578076919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578089302Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578157091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578202689Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580641100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580726566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580738845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580747690Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580756580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580765114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580772643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580781164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580790542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580798635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580806480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580814346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580832655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580847752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580858242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580866931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580879634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580898230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580914939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580923943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580931177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580940500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580948337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580963023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580980668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580989498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580996636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581056206Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581091289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581104079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581113194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581120030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581133102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581145706Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581334956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581407817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581460834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581473448Z" level=info msg="containerd successfully booted in 0.023887s"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.569483774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.598149093Z" level=info msg="Loading containers: start."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.772640000Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.832682998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.874141710Z" level=info msg="Loading containers: done."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885048604Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885231945Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907500544Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907671752Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:27:30 ha-744000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.038076014Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039237554Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:27:32 ha-744000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039672384Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039926596Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039966362Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:33 ha-744000-m02 dockerd[1165]: time="2024-09-17T17:27:33.083664420Z" level=info msg="Starting up"
	Sep 17 17:28:33 ha-744000-m02 dockerd[1165]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
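Reading the journal above: the first dockerd (pid 484) launches its own managed containerd, comes up, and is then deliberately stopped; the restarted dockerd (pid 1165) instead blocks dialing /run/containerd/containerd.sock and gives up after the 60 s deadline, which accounts for the 1m1s systemctl restart observed earlier. Plausible first checks on the guest, had it been reachable (illustrative commands only, not from the log):

	# Was the system containerd that was stopped at 10:27:31 ever restarted?
	systemctl status containerd --no-pager
	# If this socket is absent, dockerd's dial can only time out:
	ls -l /run/containerd/containerd.sock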
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:27:29 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.538749787Z" level=info msg="Starting up"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.539378325Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.541084999Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.558457504Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573199339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573220908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573258162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573299725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573411020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573446242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573553666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573587921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573599847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573607195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573685739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573880273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575404717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575443775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575555494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575590640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575719071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575763589Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.577951289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578038703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578076919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578089302Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578157091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578202689Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580641100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580726566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580738845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580747690Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580756580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580765114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580772643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580781164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580790542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580798635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580806480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580814346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580832655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580847752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580858242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580866931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580879634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580898230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580914939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580923943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580931177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580940500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580948337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580963023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580980668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580989498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580996636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581056206Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581091289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581104079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581113194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581120030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581133102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581145706Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581334956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581407817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581460834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581473448Z" level=info msg="containerd successfully booted in 0.023887s"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.569483774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.598149093Z" level=info msg="Loading containers: start."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.772640000Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.832682998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.874141710Z" level=info msg="Loading containers: done."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885048604Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885231945Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907500544Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907671752Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:27:30 ha-744000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.038076014Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039237554Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:27:32 ha-744000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039672384Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039926596Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039966362Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:33 ha-744000-m02 dockerd[1165]: time="2024-09-17T17:27:33.083664420Z" level=info msg="Starting up"
	Sep 17 17:28:33 ha-744000-m02 dockerd[1165]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 10:28:33.077325    4448 out.go:270] * 
	W0917 10:28:33.078575    4448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:28:33.141292    4448 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-744000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
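The proximate failure is visible in the journal above: docker.service was restarted inside the guest at 17:27:33, and the new dockerd (pid 1165) gave up with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded` exactly 60 seconds later, so systemd marked the unit failed and the cluster start timed out. As a minimal diagnostic sketch, not part of this test suite (the file name and the 5-second timeout are assumptions), the same unix-socket dial can be attempted directly from Go on the guest to check whether anything is listening on the containerd socket, independently of dockerd:

// probe_containerd.go — hypothetical helper, not part of this repository.
// It attempts the dial that dockerd timed out on, so a containerd socket
// that never appears can be distinguished from a slow dockerd startup.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the dockerd error in the journal above.
	const sock = "/run/containerd/containerd.sock"

	// dockerd blocks on this dial until its startup deadline (60s in the
	// log); a short explicit timeout surfaces the failure immediately.
	conn, err := net.DialTimeout("unix", sock, 5*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "containerd socket not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}

If this probe fails the same way after the restart, nothing is listening on the containerd socket, which matches the shutdown/restart sequence in the journal; if it succeeds, the startup timeout points elsewhere.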
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000: exit status 2 (146.351904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 logs -n 25: (2.194895436s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m04 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp testdata/cp-test.txt                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000 sudo cat                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m03 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-744000 node stop m02 -v=7                                                                                                 | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-744000 node start m02 -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:22 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000 -v=7                                                                                                       | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-744000 -v=7                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT | 17 Sep 24 10:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:23 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	| node    | ha-744000 node delete m03 -v=7                                                                                               | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-744000 stop -v=7                                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true                                                                                                     | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:26:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:26:58.457695    4448 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:58.457869    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.457875    4448 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:58.457878    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.458048    4448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:58.459431    4448 out.go:352] Setting JSON to false
	I0917 10:26:58.481798    4448 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3385,"bootTime":1726590633,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:26:58.481949    4448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:26:58.503960    4448 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:26:58.546841    4448 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:26:58.546875    4448 notify.go:220] Checking for updates...
	I0917 10:26:58.589550    4448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:26:58.610683    4448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:26:58.631667    4448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:26:58.652583    4448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:26:58.673667    4448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:26:58.695561    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:58.696255    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.696327    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.705884    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52142
	I0917 10:26:58.706304    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.706746    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.706764    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.707014    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.707146    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.707350    4448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:26:58.707601    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.707628    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.716185    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 10:26:58.716537    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.716881    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.716897    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.717100    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.717222    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.745596    4448 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:26:58.787571    4448 start.go:297] selected driver: hyperkit
	I0917 10:26:58.787600    4448 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.787838    4448 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:26:58.788024    4448 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.788251    4448 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:26:58.797793    4448 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:26:58.801784    4448 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.801808    4448 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:26:58.804449    4448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:26:58.804489    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:26:58.804523    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:26:58.804589    4448 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.804704    4448 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.826385    4448 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:26:58.847617    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:26:58.847686    4448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:26:58.847716    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:26:58.847928    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:26:58.847948    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:26:58.848103    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:58.849030    4448 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:26:58.849203    4448 start.go:364] duration metric: took 147.892µs to acquireMachinesLock for "ha-744000"
	I0917 10:26:58.849244    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:26:58.849261    4448 fix.go:54] fixHost starting: 
	I0917 10:26:58.849685    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.849713    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.858847    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52146
	I0917 10:26:58.859214    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.859547    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.859558    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.859809    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.859941    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.860044    4448 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:58.860131    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.860222    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:58.861252    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.861281    4448 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:26:58.861296    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:26:58.861379    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:26:58.903396    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:26:58.924477    4448 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:26:58.924739    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.924805    4448 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:26:58.926818    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.926830    4448 main.go:141] libmachine: (ha-744000) DBG | pid 4331 is in state "Stopped"
	I0917 10:26:58.926844    4448 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
	I0917 10:26:58.927183    4448 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:26:59.037116    4448 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:26:59.037141    4448 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:26:59.037239    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037264    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037302    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:26:59.037345    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:26:59.037367    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:26:59.039007    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Pid is 4462
	I0917 10:26:59.039387    4448 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:26:59.039405    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:59.039460    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4462
	I0917 10:26:59.040899    4448 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:26:59.040968    4448 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:26:59.040982    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:26:59.040991    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:26:59.041010    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:26:59.041033    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:26:59.041040    4448 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:26:59.041046    4448 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
	I0917 10:26:59.041079    4448 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:26:59.041673    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:26:59.041837    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:59.042200    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:26:59.042209    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:59.042313    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:26:59.042393    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:26:59.042497    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042594    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:26:59.042817    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:26:59.043033    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:26:59.043044    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:26:59.047101    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:26:59.098991    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:26:59.099689    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.099714    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.099723    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.099730    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.478495    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:26:59.478510    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:26:59.593167    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.593183    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.593195    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.593203    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.594075    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:26:59.594086    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:05.183473    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:05.183540    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:05.183555    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:05.208169    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:10.113996    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:10.114014    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114152    4448 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:27:10.114163    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114266    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.114402    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.114494    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114584    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.114812    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.114997    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.115005    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:27:10.189969    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:27:10.189985    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.190121    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.190233    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190324    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190425    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.190562    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.190707    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.190718    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:10.253511    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:10.253531    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:10.253549    4448 buildroot.go:174] setting up certificates
	I0917 10:27:10.253555    4448 provision.go:84] configureAuth start
	I0917 10:27:10.253563    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.253694    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:10.253790    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.253930    4448 provision.go:143] copyHostCerts
	I0917 10:27:10.253971    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254039    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:10.254046    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254180    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:10.254370    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254409    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:10.254414    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254534    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:10.254684    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254722    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:10.254727    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254807    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:10.254980    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
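The server certificate generated here is what lets clients reach the machine by any of its identities: loopback, the node IP, the node hostname, and the generic minikube names all appear in the SAN list. As a rough stand-in only (minikube builds and signs this certificate in Go against ca.pem; the sketch below produces a self-signed equivalent and needs OpenSSL 1.1.1+ for -addext):

    # illustrative self-signed cert with the same SAN set as the log line above
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.ha-744000/CN=minikube" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-744000,DNS:localhost,DNS:minikube"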
	I0917 10:27:10.443647    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:10.443709    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:10.443745    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.444017    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.444217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.444311    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.444408    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:10.481724    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:10.481797    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:10.501694    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:10.501755    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:27:10.521451    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:10.521514    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:27:10.541883    4448 provision.go:87] duration metric: took 288.31459ms to configureAuth
	I0917 10:27:10.541895    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:10.542067    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:10.542085    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:10.542217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.542312    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.542387    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542467    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542559    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.542679    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.542806    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.542813    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:10.601508    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:10.601520    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:10.601615    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:10.601630    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.601764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.601865    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.601953    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.602043    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.602200    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.602343    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.602386    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:10.669944    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:10.669969    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.670102    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.670200    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670294    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670389    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.670510    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.670646    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.670658    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:12.369424    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:12.369438    4448 machine.go:96] duration metric: took 13.32714724s to provisionDockerMachine
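Two idioms in the unit update above are worth spelling out. The empty ExecStart= line clears any start command inherited from a base unit so systemd ends up with exactly one (the unit's own comments explain why), and the activation command only swaps in docker.service.new and restarts Docker when diff reports a change, which keeps re-provisioning idempotent. A minimal sketch of the same pattern, using a hypothetical my-daemon.service:

    # render the candidate unit; the empty ExecStart= resets any inherited start command
    cat > /tmp/my-daemon.service.new <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/my-daemon --flag
    EOF
    # replace and restart only when the rendered unit actually changed
    sudo diff -u /lib/systemd/system/my-daemon.service /tmp/my-daemon.service.new || {
      sudo mv /tmp/my-daemon.service.new /lib/systemd/system/my-daemon.service
      sudo systemctl daemon-reload && sudo systemctl restart my-daemon
    }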
	I0917 10:27:12.369451    4448 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:27:12.369463    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:12.369473    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.369675    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:12.369692    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.369803    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.369884    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.369975    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.370067    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.413317    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:12.417238    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:12.417272    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:12.417380    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:12.417569    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:12.417576    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:12.417788    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:12.427707    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:12.461431    4448 start.go:296] duration metric: took 91.970306ms for postStartSetup
	I0917 10:27:12.461460    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.461662    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:12.461675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.461764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.461863    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.461951    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.462049    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.498975    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:12.499039    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:12.553785    4448 fix.go:56] duration metric: took 13.704442272s for fixHost
	I0917 10:27:12.553808    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.553948    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.554064    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554158    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554243    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.554376    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:12.554528    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:12.554535    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:12.611703    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594032.650749132
	
	I0917 10:27:12.611715    4448 fix.go:216] guest clock: 1726594032.650749132
	I0917 10:27:12.611721    4448 fix.go:229] Guest: 2024-09-17 10:27:12.650749132 -0700 PDT Remote: 2024-09-17 10:27:12.553798 -0700 PDT m=+14.131667372 (delta=96.951132ms)
	I0917 10:27:12.611739    4448 fix.go:200] guest clock delta is within tolerance: 96.951132ms
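The skew check works by running date +%s.%N on the guest over SSH and comparing it with the host clock at the moment the command returns; here the 96.951132ms delta is well inside tolerance. The same check by hand (same host and key as this run; whole seconds for portability, since BSD/macOS date has no %N):

    guest=$(ssh -i /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa \
      docker@192.169.0.5 'date +%s')
    host=$(date +%s)
    echo "guest-host delta: $((guest - host))s"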
	I0917 10:27:12.611750    4448 start.go:83] releasing machines lock for "ha-744000", held for 13.76244446s
	I0917 10:27:12.611768    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.611894    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:12.611995    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612340    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612438    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612522    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:12.612557    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612569    4448 ssh_runner.go:195] Run: cat /version.json
	I0917 10:27:12.612585    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612694    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612758    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612775    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612845    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612893    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612945    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.612977    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.648784    4448 ssh_runner.go:195] Run: systemctl --version
	I0917 10:27:12.693591    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:27:12.698718    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:12.698762    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:12.712125    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:12.712136    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.712235    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:12.730012    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:12.739057    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:12.747889    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:12.747935    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:12.757003    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.765797    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:12.774517    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.783400    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:12.792355    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:12.801214    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:12.810043    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:12.818991    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:12.826988    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:12.835075    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:12.932332    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
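The sed pipeline above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs cgroup driver, and the sandbox_image line pins registry.k8s.io/pause:3.10. A quick spot-check on the guest (assuming the same SSH access) confirms the edits landed after the restart:

    # both values should reflect the sed edits above
    sudo grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml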
	I0917 10:27:12.951203    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.951306    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:12.965837    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:12.981143    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:12.997816    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:13.008834    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.019726    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:13.047621    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.057914    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:13.072731    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:13.075778    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:13.083057    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:13.096420    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:13.190446    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:13.291417    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:13.291479    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
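The 130-byte daemon.json payload itself is not echoed in the log, only its size. A representative file that selects the cgroupfs driver for dockerd would look like the following (assumed content for illustration, not the literal bytes written here):

    # assumed content; the log records only "(130 bytes)"
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF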
	I0917 10:27:13.305208    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:13.405566    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:27:15.763788    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.358187677s)
	I0917 10:27:15.763854    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:27:15.774266    4448 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:27:15.786987    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:15.797461    4448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:27:15.892958    4448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:27:15.992563    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.099704    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:27:16.113167    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:16.123851    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.230595    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:27:16.294806    4448 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:27:16.294898    4448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:27:16.300863    4448 start.go:563] Will wait 60s for crictl version
	I0917 10:27:16.300922    4448 ssh_runner.go:195] Run: which crictl
	I0917 10:27:16.304010    4448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:27:16.329606    4448 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:27:16.329710    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.346052    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.386748    4448 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:27:16.386784    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:16.387136    4448 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:27:16.390752    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
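This one-liner is minikube's idempotent hosts-entry idiom: grep -v filters out any existing host.minikube.internal line, the echo appends the current mapping, and the result is copied back over /etc/hosts, so repeated starts never accumulate duplicates. The same idea as a reusable sketch (function name and arguments are illustrative):

    # keep exactly one "<ip><TAB><name>" line in /etc/hosts
    ensure_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm "/tmp/h.$$"
    }
    ensure_hosts_entry 192.169.0.1 host.minikube.internal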
	I0917 10:27:16.401571    4448 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:27:16.401664    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:16.401736    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.415872    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.415884    4448 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:27:16.415970    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.427730    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.427747    4448 cache_images.go:84] Images are preloaded, skipping loading
	I0917 10:27:16.427754    4448 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:27:16.427829    4448 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:27:16.427915    4448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:27:16.463597    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:27:16.463611    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:27:16.463624    4448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:27:16.463640    4448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:27:16.463730    4448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
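
The three rendered documents above (InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy configs) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases (v1.26+) can sanity-check such a file before it is used; assuming the binary under the path shown later in this log, that would be:

    # hedged: "kubeadm config validate" is available in kubeadm v1.26 and newer
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new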
	
	I0917 10:27:16.463744    4448 kube-vip.go:115] generating kube-vip config ...
	I0917 10:27:16.463801    4448 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:27:16.478021    4448 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:27:16.478094    4448 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
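This static pod is what implements the APIServerHAVIP seen in the cluster config: per the env block, kube-vip binds address 192.169.0.254 to vip_interface eth0 on whichever control-plane node wins leader election through the plndr-cp-lock lease. Once the cluster is up, both effects are observable on the guest:

    # the VIP appears as an extra address on eth0 of the current leader
    ip addr show eth0 | grep 192.169.0.254
    # leader election surfaces as a coordination Lease in kube-system
    kubectl -n kube-system get lease plndr-cp-lock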
	I0917 10:27:16.478153    4448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:27:16.486558    4448 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:27:16.486616    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:27:16.494493    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:27:16.507997    4448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:27:16.521295    4448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:27:16.535199    4448 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:27:16.548668    4448 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:27:16.551530    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.561441    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.669349    4448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:27:16.684528    4448 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:27:16.684541    4448 certs.go:194] generating shared ca certs ...
	I0917 10:27:16.684551    4448 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.684731    4448 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:27:16.684804    4448 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:27:16.684814    4448 certs.go:256] generating profile certs ...
	I0917 10:27:16.684905    4448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:27:16.684929    4448 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437
	I0917 10:27:16.684945    4448 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0917 10:27:16.754039    4448 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 ...
	I0917 10:27:16.754056    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437: {Name:mk79438fdb4dc3d525e8f682359147c957173c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754456    4448 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 ...
	I0917 10:27:16.754466    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437: {Name:mk6d911cd96357b3c3159c4d3a41f23afb7d4c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754680    4448 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:27:16.754895    4448 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:27:16.755149    4448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:27:16.755158    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:27:16.755205    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:27:16.755227    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:27:16.755246    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:27:16.755264    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:27:16.755283    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:27:16.755301    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:27:16.755318    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:27:16.755412    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:27:16.755459    4448 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:27:16.755467    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:27:16.755497    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:27:16.755530    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:27:16.755558    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:27:16.755623    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:16.755655    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:16.755675    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:27:16.755693    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:27:16.756123    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:27:16.777874    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:27:16.799280    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:27:16.827224    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:27:16.853838    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:27:16.907328    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:27:16.953101    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:27:16.997682    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:27:17.038330    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:27:17.061602    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:27:17.092949    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:27:17.123494    4448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:27:17.140334    4448 ssh_runner.go:195] Run: openssl version
	I0917 10:27:17.145978    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:27:17.156986    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161699    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161756    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.170341    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:27:17.187142    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:27:17.201375    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204789    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204832    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.208961    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:27:17.218128    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:27:17.227213    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230513    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230553    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.234703    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
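The b5213941.0, 51391683.0, and 3ec20f2e.0 link names are OpenSSL subject-hash lookups: TLS tooling that scans /etc/ssl/certs finds a CA by hashing the certificate subject and appending a .0 suffix, which is why each ln -fs above is paired with an openssl x509 -hash run. Reproducing the first link by hand:

    # prints b5213941 for minikubeCA.pem, matching the symlink created above
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"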
	I0917 10:27:17.243926    4448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:27:17.247354    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:27:17.251674    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:27:17.256090    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:27:17.260499    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:27:17.264702    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:27:17.268923    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
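Each of these openssl runs uses -checkend 86400, which exits non-zero when the certificate expires within the next 24 hours; the exit codes are how minikube decides whether the existing control-plane certificates can be reused instead of regenerated. For example:

    # exit 0: still valid 24h from now; exit 1: expiring soon, regenerate
    sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"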
	I0917 10:27:17.273119    4448 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:27:17.273252    4448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:27:17.284758    4448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:27:17.293284    4448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:27:17.293296    4448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:27:17.293343    4448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:27:17.301434    4448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:27:17.301756    4448 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.301839    4448 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:27:17.302016    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.302656    4448 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.302866    4448 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x4ad2720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:27:17.303186    4448 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:27:17.303370    4448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:27:17.311395    4448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:27:17.311410    4448 kubeadm.go:597] duration metric: took 18.109722ms to restartPrimaryControlPlane
	I0917 10:27:17.311416    4448 kubeadm.go:394] duration metric: took 38.30313ms to StartCluster
	I0917 10:27:17.311425    4448 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.311502    4448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.311847    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.312074    4448 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:27:17.312086    4448 start.go:241] waiting for startup goroutines ...
	I0917 10:27:17.312098    4448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:27:17.312209    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.356558    4448 out.go:177] * Enabled addons: 
	I0917 10:27:17.377453    4448 addons.go:510] duration metric: took 65.359314ms for enable addons: enabled=[]
	I0917 10:27:17.377491    4448 start.go:246] waiting for cluster config update ...
	I0917 10:27:17.377508    4448 start.go:255] writing updated cluster config ...
	I0917 10:27:17.399517    4448 out.go:201] 
	I0917 10:27:17.421006    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.421153    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.443394    4448 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:27:17.485722    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:17.485786    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:27:17.485968    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:27:17.485986    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:27:17.486112    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.487099    4448 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:27:17.487205    4448 start.go:364] duration metric: took 81.172µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:27:17.487235    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:27:17.487243    4448 fix.go:54] fixHost starting: m02
	I0917 10:27:17.487683    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:27:17.487720    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:27:17.497503    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52168
	I0917 10:27:17.498037    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:27:17.498462    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:27:17.498477    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:27:17.498776    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:27:17.499011    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.499112    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:27:17.499198    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.499265    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:27:17.500274    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.500290    4448 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:27:17.500304    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:27:17.500387    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:27:17.542418    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:27:17.563504    4448 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:27:17.563707    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.563730    4448 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:27:17.564875    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.564887    4448 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4339 is in state "Stopped"
	I0917 10:27:17.564903    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
	I0917 10:27:17.565097    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:27:17.591233    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:27:17.591269    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:27:17.591443    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591484    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591541    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:27:17.591573    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:27:17.591591    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:27:17.592872    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Pid is 4469
	I0917 10:27:17.593367    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:27:17.593378    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.593408    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4469
	I0917 10:27:17.595062    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:27:17.595127    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:27:17.595146    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0d6c}
	I0917 10:27:17.595182    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:27:17.595200    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:27:17.595210    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:27:17.595213    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:27:17.595230    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:27:17.595241    4448 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
	I0917 10:27:17.595879    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:17.596065    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.596597    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:27:17.596609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.596723    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:17.596804    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:17.596890    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597096    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:17.597227    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:17.597374    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:17.597383    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:27:17.600658    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:27:17.609248    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:27:17.610115    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:17.610129    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:17.610159    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:17.610179    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:17.995972    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:27:17.995987    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:27:18.110623    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:18.110642    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:18.110651    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:18.110657    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:18.111459    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:27:18.111468    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:23.703289    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:23.703415    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:23.703428    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:23.727083    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:28.668165    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:28.668207    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668348    4448 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:27:28.668359    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668445    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.668533    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.668618    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.668945    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.669097    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.669106    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:27:28.749259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:27:28.749274    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.749405    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.749513    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749700    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.749847    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.749994    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.750009    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:28.821499    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:28.821514    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:28.821523    4448 buildroot.go:174] setting up certificates
	I0917 10:27:28.821528    4448 provision.go:84] configureAuth start
	I0917 10:27:28.821534    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.821669    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:28.821789    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.821885    4448 provision.go:143] copyHostCerts
	I0917 10:27:28.821910    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.821968    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:28.821973    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.822114    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:28.822315    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822354    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:28.822366    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822450    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:28.822596    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822635    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:28.822639    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822717    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:28.822857    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
	I0917 10:27:28.955024    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:28.955079    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:28.955094    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.955239    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.955341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.955430    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.955526    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:28.994909    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:28.994978    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:27:29.014096    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:29.014170    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:27:29.033197    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:29.033261    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:29.052129    4448 provision.go:87] duration metric: took 230.592645ms to configureAuth
	I0917 10:27:29.052147    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:29.052322    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:29.052336    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:29.052473    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.052573    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.052670    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052755    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052827    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.052942    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.053069    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.053076    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:29.116259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:29.116272    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:29.116365    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:29.116377    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.116506    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.116595    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116715    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116793    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.116936    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.117075    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.117118    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:29.192146    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:29.192170    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.192303    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.192391    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192497    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192577    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.192705    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.192844    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.192856    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:30.870717    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:30.870732    4448 machine.go:96] duration metric: took 13.274043119s to provisionDockerMachine
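The docker.service exchange above follows a diff-gated write-then-swap idiom: the generated unit is written to docker.service.new, and `diff -u` exits non-zero both when the two files differ and when the installed unit does not yet exist (the "can't stat" output logged above), so in either case the new unit is moved into place and the daemon is reloaded, enabled, and restarted. A minimal standalone sketch of the same idiom, with $UNIT standing in for the generated unit text (illustrative, not minikube's source):

	# Write the candidate unit, then swap it in only when it differs
	# from (or is missing as) the installed unit.
	printf '%s' "$UNIT" | sudo tee /lib/systemd/system/docker.service.new >/dev/null
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }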
	I0917 10:27:30.870747    4448 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:27:30.870755    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:30.870766    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.870980    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:30.870994    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.871125    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.871248    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.871341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.871432    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.914708    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:30.918099    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:30.918113    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:30.918212    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:30.918387    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:30.918394    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:30.918605    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:30.929083    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:30.958117    4448 start.go:296] duration metric: took 87.359751ms for postStartSetup
	I0917 10:27:30.958138    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.958316    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:30.958328    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.958426    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.958518    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.958597    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.958669    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.998754    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:30.998827    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:31.054686    4448 fix.go:56] duration metric: took 13.567353836s for fixHost
	I0917 10:27:31.054713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.054850    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.054939    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055014    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055085    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.055233    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:31.055380    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:31.055386    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:31.119216    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594051.159133703
	
	I0917 10:27:31.119227    4448 fix.go:216] guest clock: 1726594051.159133703
	I0917 10:27:31.119235    4448 fix.go:229] Guest: 2024-09-17 10:27:31.159133703 -0700 PDT Remote: 2024-09-17 10:27:31.054702 -0700 PDT m=+32.632454337 (delta=104.431703ms)
	I0917 10:27:31.119246    4448 fix.go:200] guest clock delta is within tolerance: 104.431703ms
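The guest-clock check above runs `date +%s.%N` on the node over SSH and compares the result against the host clock, accepting the machine when the absolute delta is small. A rough shell equivalent, assuming GNU date (%N support) on both ends; the 2-second tolerance is an assumption for illustration, not minikube's configured value:

	# Hypothetical skew check against the m02 node from the log.
	KEY=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa
	guest=$(ssh -i "$KEY" docker@192.169.0.6 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; exit (d < 2) ? 0 : 1 }' \
	  && echo "guest clock delta within tolerance"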
	I0917 10:27:31.119250    4448 start.go:83] releasing machines lock for "ha-744000-m02", held for 13.631947572s
	I0917 10:27:31.119267    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.119393    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:31.143966    4448 out.go:177] * Found network options:
	I0917 10:27:31.164924    4448 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:27:31.185989    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.186029    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.186884    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187158    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187319    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:31.187368    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:27:31.187382    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.187491    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:27:31.187550    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.187616    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187796    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.187813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187986    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.188154    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188197    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:31.188284    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:27:31.224656    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:31.224727    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:31.272646    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:31.272663    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.272743    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.288486    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:31.297401    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:31.306736    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.306808    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:31.316018    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.325058    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:31.334512    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.343837    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:31.353242    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:31.362032    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:31.371387    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:31.380261    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:31.388512    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:31.396778    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.496690    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
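The sed sequence above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10, sets restrict_oom_score_adj = false, maps the legacy io.containerd.runtime.v1.linux and runc.v1 shims to io.containerd.runc.v2, forces SystemdCgroup = false (the "cgroupfs" driver named in the log), and re-inserts enable_unprivileged_ports = true under the CRI plugin stanza; containerd is then restarted. One way to spot-check the result on the guest (a verification sketch, not part of the test):

	# Confirm the cgroup-driver edit landed after the restart.
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# expected: SystemdCgroup = false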
	I0917 10:27:31.515568    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.515642    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:31.540737    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.552945    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:31.572641    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.584129    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.595235    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:31.619571    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.631020    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.646195    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:31.649235    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:31.657206    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:31.670819    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:31.769091    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:31.876805    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.876827    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:31.890932    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.985803    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:28:33.019399    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.033193508s)
	I0917 10:28:33.019489    4448 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:28:33.055431    4448 out.go:201] 
	W0917 10:28:33.077249    4448 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:27:29 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.538749787Z" level=info msg="Starting up"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.539378325Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.541084999Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.558457504Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573199339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573220908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573258162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573299725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573411020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573446242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573553666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573587921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573599847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573607195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573685739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573880273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575404717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575443775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575555494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575590640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575719071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575763589Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.577951289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578038703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578076919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578089302Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578157091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578202689Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580641100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580726566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580738845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580747690Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580756580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580765114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580772643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580781164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580790542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580798635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580806480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580814346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580832655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580847752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580858242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580866931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580879634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580898230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580914939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580923943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580931177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580940500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580948337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580963023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580980668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580989498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580996636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581056206Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581091289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581104079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581113194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581120030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581133102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581145706Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581334956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581407817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581460834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581473448Z" level=info msg="containerd successfully booted in 0.023887s"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.569483774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.598149093Z" level=info msg="Loading containers: start."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.772640000Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.832682998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.874141710Z" level=info msg="Loading containers: done."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885048604Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885231945Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907500544Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907671752Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:27:30 ha-744000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.038076014Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039237554Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:27:32 ha-744000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039672384Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039926596Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039966362Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:33 ha-744000-m02 dockerd[1165]: time="2024-09-17T17:27:33.083664420Z" level=info msg="Starting up"
	Sep 17 17:28:33 ha-744000-m02 dockerd[1165]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 10:28:33.077325    4448 out.go:270] * 
	W0917 10:28:33.078575    4448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:28:33.141292    4448 out.go:201] 
	
	
	==> Docker <==
	Sep 17 17:27:23 ha-744000 dockerd[1184]: time="2024-09-17T17:27:23.723049053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:27:44 ha-744000 dockerd[1184]: time="2024-09-17T17:27:44.678984946Z" level=info msg="shim disconnected" id=f8088538f8c3df59f5ff60bd1a281360ce7e3c58b0c25d042dd62fcfa88dcf7e namespace=moby
	Sep 17 17:27:44 ha-744000 dockerd[1184]: time="2024-09-17T17:27:44.679037338Z" level=warning msg="cleaning up after shim disconnected" id=f8088538f8c3df59f5ff60bd1a281360ce7e3c58b0c25d042dd62fcfa88dcf7e namespace=moby
	Sep 17 17:27:44 ha-744000 dockerd[1184]: time="2024-09-17T17:27:44.679046952Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:27:44 ha-744000 dockerd[1178]: time="2024-09-17T17:27:44.679623961Z" level=info msg="ignoring event" container=f8088538f8c3df59f5ff60bd1a281360ce7e3c58b0c25d042dd62fcfa88dcf7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:27:45 ha-744000 dockerd[1184]: time="2024-09-17T17:27:45.692431406Z" level=info msg="shim disconnected" id=f8ad30db3b448056ed93e2d805c2b8b365fc8dbe578b4b515549ac815f60dabc namespace=moby
	Sep 17 17:27:45 ha-744000 dockerd[1184]: time="2024-09-17T17:27:45.692501843Z" level=warning msg="cleaning up after shim disconnected" id=f8ad30db3b448056ed93e2d805c2b8b365fc8dbe578b4b515549ac815f60dabc namespace=moby
	Sep 17 17:27:45 ha-744000 dockerd[1184]: time="2024-09-17T17:27:45.692510697Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:27:45 ha-744000 dockerd[1178]: time="2024-09-17T17:27:45.693599135Z" level=info msg="ignoring event" container=f8ad30db3b448056ed93e2d805c2b8b365fc8dbe578b4b515549ac815f60dabc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.363808714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.363881678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.363895120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.364009200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982686773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982795889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982809691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982891719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908438866Z" level=info msg="shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908495753Z" level=warning msg="cleaning up after shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908504694Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1178]: time="2024-09-17T17:28:15.909053440Z" level=info msg="ignoring event" container=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.924890203Z" level=info msg="shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925281000Z" level=warning msg="cleaning up after shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925315687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1178]: time="2024-09-17T17:28:26.926104549Z" level=info msg="ignoring event" container=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6b1d67e1da594       175ffd71cce3d       30 seconds ago       Exited              kube-controller-manager   4                   ac5039c087055       kube-controller-manager-ha-744000
	66235de21ec80       6bab7719df100       39 seconds ago       Exited              kube-apiserver            3                   049299c96bb2c       kube-apiserver-ha-744000
	bbf0d2ebe5c6c       9aa1fad941575       About a minute ago   Running             kube-scheduler            2                   339a7c29b977e       kube-scheduler-ha-744000
	1e359ca4a791e       2e96e5913fc06       About a minute ago   Running             etcd                      2                   bf723b1d8bf7c       etcd-ha-744000
	6df162190be2a       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   026314418eb78       kube-vip-ha-744000
	1b95d7a1c7708       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       2                   375cde06a4bcf       storage-provisioner
	079da006755a7       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   f0eee6e67fe42       busybox-7dff88458-cn52t
	9f76145e8eaf7       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   8b4b5191649e7       kindnet-c59lr
	6a4aba3acb1e9       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   3888ce04e78db       coredns-7c65d6cfc9-khnlh
	fb8b83fe49a6e       60c005f310ff3       4 minutes ago        Exited              kube-proxy                1                   f1782d63db94f       kube-proxy-6xd2h
	24cfd031ec879       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   244f5bc456efc       coredns-7c65d6cfc9-j9jcc
	cfbfd57cf2b56       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   433c480eea542       kube-vip-ha-744000
	a7645ef2ae8dd       9aa1fad941575       5 minutes ago        Exited              kube-scheduler            1                   fbf79ae31cbab       kube-scheduler-ha-744000
	23a7e0d95a77c       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   55cb3d05ddf34       etcd-ha-744000
	
	
	==> coredns [24cfd031ec87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52682 - 33898 "HINFO IN 2709939145458862568.721558315158165230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009931439s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[318103159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.683) (total time: 30003ms):
	Trace[318103159]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:24:50.686)
	Trace[318103159]: [30.003131559s] [30.003131559s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1979128092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1979128092]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1979128092]: [30.000652416s] [30.000652416s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1978210991]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1978210991]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1978210991]: [30.000766886s] [30.000766886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a4aba3acb1e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60360 - 19575 "HINFO IN 3607648931521447410.3411894034218696920. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009401347s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960564509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1960564509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.746)
	Trace[1960564509]: [30.00213331s] [30.00213331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197674287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1197674287]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[1197674287]: [30.002759704s] [30.002759704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[633118280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30003ms):
	Trace[633118280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[633118280]: [30.003193097s] [30.003193097s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 17:28:34.420712    2667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:34.422204    2667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:34.423818    2667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:34.425729    2667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:34.427345    2667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035209] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007985] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[Sep17 17:27] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006963] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.845078] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.235754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000048] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.478686] systemd-fstab-generator[466]: Ignoring "noauto" option for root device
	[  +0.092656] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +2.006519] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.259762] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.049883] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051714] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.112681] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +2.485271] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	[  +0.103516] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.100618] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.134329] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.431436] systemd-fstab-generator[1594]: Ignoring "noauto" option for root device
	[  +6.580361] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.488197] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [1e359ca4a791] <==
	{"level":"info","ts":"2024-09-17T17:28:29.551958Z","caller":"traceutil/trace.go:171","msg":"trace[797835779] range","detail":"{range_begin:; range_end:; }","duration":"7.003525995s","start":"2024-09-17T17:28:22.548422Z","end":"2024-09-17T17:28:29.551948Z","steps":["trace[797835779] 'agreement among raft nodes before linearized reading'  (duration: 7.003477011s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T17:28:29.552276Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-17T17:28:30.095994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:30.096135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:30.096178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:30.096214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:31.595018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:31.595144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:31.595174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:31.595206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:33.050771Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336208,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:33.095858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:33.095940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:33.095959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:33.095976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:33.561199Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336208,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:34.062436Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336208,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:34.184124Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-744000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-17T17:28:34.257831Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:34.257932Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:34.564138Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336208,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:34.595326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:34.595353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:34.595363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:34.595374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	
	
	==> etcd [23a7e0d95a77] <==
	{"level":"warn","ts":"2024-09-17T17:26:50.587150Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.962871734s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.169.0.5\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587161Z","caller":"traceutil/trace.go:171","msg":"trace[618307594] range","detail":"{range_begin:/registry/masterleases/192.169.0.5; range_end:; }","duration":"6.962884303s","start":"2024-09-17T17:26:43.624274Z","end":"2024-09-17T17:26:50.587158Z","steps":["trace[618307594] 'agreement among raft nodes before linearized reading'  (duration: 6.96287178s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:43.624238Z","time spent":"6.962930406s","remote":"127.0.0.1:50532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/masterleases/192.169.0.5\" "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.551739854s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587269Z","caller":"traceutil/trace.go:171","msg":"trace[474401785] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.551753744s","start":"2024-09-17T17:26:49.035511Z","end":"2024-09-17T17:26:50.587265Z","steps":["trace[474401785] 'agreement among raft nodes before linearized reading'  (duration: 1.551739815s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:49.035495Z","time spent":"1.551781157s","remote":"127.0.0.1:50648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.571949422s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587333Z","caller":"traceutil/trace.go:171","msg":"trace[779412434] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"3.571960909s","start":"2024-09-17T17:26:47.015370Z","end":"2024-09-17T17:26:50.587331Z","steps":["trace[779412434] 'agreement among raft nodes before linearized reading'  (duration: 3.571949266s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:47.015364Z","time spent":"3.571976754s","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:45.985835Z","time spent":"4.601799065s","remote":"127.0.0.1:50768","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-17T17:26:50.686768Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:26:50.686883Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686894Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686906Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686956Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686981Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687003Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687012Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.698284Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698473Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-744000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:28:34 up 1 min,  0 users,  load average: 0.19, 0.11, 0.04
	Linux ha-744000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f76145e8eaf] <==
	I0917 17:26:11.511367       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:11.512152       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:11.512248       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:11.512772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:11.512871       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504250       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:21.504302       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504625       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:21.504682       1 main.go:299] handling current node
	I0917 17:26:21.504706       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:21.504715       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:21.504816       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:21.504869       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:31.506309       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:31.506431       1 main.go:299] handling current node
	I0917 17:26:31.506449       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:31.506462       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:31.506621       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:31.506656       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.505932       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:41.506052       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:41.506553       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:41.506833       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.507226       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:41.507357       1 main.go:299] handling current node
	
	
	==> kube-apiserver [66235de21ec8] <==
	I0917 17:27:55.463149       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 17:27:55.464516       1 server.go:142] Version: v1.31.1
	I0917 17:27:55.464560       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:27:55.885027       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 17:27:55.888983       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:27:55.891463       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 17:27:55.891532       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 17:27:55.891764       1 instance.go:232] Using reconciler: lease
	W0917 17:28:15.884845       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:15.884898       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 17:28:15.892713       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6b1d67e1da59] <==
	I0917 17:28:05.497749       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:06.034875       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:06.034965       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:06.036148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:06.036157       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:06.036166       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:06.036173       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 17:28:26.901132       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [fb8b83fe49a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:24:21.123827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:24:21.146583       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:24:21.146876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:24:21.179243       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:24:21.179464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:24:21.179596       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:24:21.183190       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:24:21.184459       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:24:21.184543       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:24:21.188244       1 config.go:199] "Starting service config controller"
	I0917 17:24:21.188350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:24:21.188588       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:24:21.188659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:24:21.192108       1 config.go:328] "Starting node config controller"
	I0917 17:24:21.192216       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:24:21.289888       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:24:21.289903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:24:21.293411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7645ef2ae8d] <==
	E0917 17:23:52.361916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.361961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:23:52.361995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:23:52.362165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:23:52.362240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:23:52.362314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:23:52.362490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:23:52.362567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:23:52.362640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:23:52.362799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 17:23:53.372962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:26:50.603688       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bbf0d2ebe5c6] <==
	E0917 17:28:15.522927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:16.119311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:16.119412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:16.899549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33350->192.169.0.5:8443: read: connection reset by peer
	E0917 17:28:16.899706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33350->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 17:28:16.899606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33332->192.169.0.5:8443: read: connection reset by peer
	E0917 17:28:16.899810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33332->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 17:28:26.515216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:26.515313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:26.886220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:26.886322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:27.658990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:27.659043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:27.780949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:27.780999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:27.954747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:27.954795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:29.812244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:29.812295       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:31.899209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:31.899308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:32.373782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:32.373902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:35.010233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:35.010333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 17 17:28:16 ha-744000 kubelet[1601]: E0917 17:28:16.943725    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:16 ha-744000 kubelet[1601]: E0917 17:28:16.965397    1601 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:28:16 ha-744000 kubelet[1601]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:28:16 ha-744000 kubelet[1601]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:28:16 ha-744000 kubelet[1601]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:28:16 ha-744000 kubelet[1601]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:28:19 ha-744000 kubelet[1601]: I0917 17:28:19.410946    1601 scope.go:117] "RemoveContainer" containerID="66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937"
	Sep 17 17:28:19 ha-744000 kubelet[1601]: E0917 17:28:19.411476    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:28:22 ha-744000 kubelet[1601]: I0917 17:28:22.233957    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:24 ha-744000 kubelet[1601]: E0917 17:28:24.445275    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:24 ha-744000 kubelet[1601]: E0917 17:28:24.445342    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:24 ha-744000 kubelet[1601]: E0917 17:28:24.445468    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	Sep 17 17:28:25 ha-744000 kubelet[1601]: I0917 17:28:25.336242    1601 scope.go:117] "RemoveContainer" containerID="66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937"
	Sep 17 17:28:25 ha-744000 kubelet[1601]: E0917 17:28:25.336431    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:28:26 ha-744000 kubelet[1601]: E0917 17:28:26.943874    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:27 ha-744000 kubelet[1601]: I0917 17:28:27.934687    1601 scope.go:117] "RemoveContainer" containerID="f8ad30db3b448056ed93e2d805c2b8b365fc8dbe578b4b515549ac815f60dabc"
	Sep 17 17:28:27 ha-744000 kubelet[1601]: I0917 17:28:27.935298    1601 scope.go:117] "RemoveContainer" containerID="6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721"
	Sep 17 17:28:27 ha-744000 kubelet[1601]: E0917 17:28:27.935403    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-744000_kube-system(87fd03b66c2a086675ca4f807d61ceb6)\"" pod="kube-system/kube-controller-manager-ha-744000" podUID="87fd03b66c2a086675ca4f807d61ceb6"
	Sep 17 17:28:31 ha-744000 kubelet[1601]: I0917 17:28:31.447049    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:32 ha-744000 kubelet[1601]: I0917 17:28:32.467960    1601 scope.go:117] "RemoveContainer" containerID="6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721"
	Sep 17 17:28:32 ha-744000 kubelet[1601]: E0917 17:28:32.468148    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-744000_kube-system(87fd03b66c2a086675ca4f807d61ceb6)\"" pod="kube-system/kube-controller-manager-ha-744000" podUID="87fd03b66c2a086675ca4f807d61ceb6"
	Sep 17 17:28:33 ha-744000 kubelet[1601]: E0917 17:28:33.656953    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:33 ha-744000 kubelet[1601]: E0917 17:28:33.656983    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:33 ha-744000 kubelet[1601]: I0917 17:28:33.734614    1601 scope.go:117] "RemoveContainer" containerID="6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721"
	Sep 17 17:28:33 ha-744000 kubelet[1601]: E0917 17:28:33.734746    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-744000_kube-system(87fd03b66c2a086675ca4f807d61ceb6)\"" pod="kube-system/kube-controller-manager-ha-744000" podUID="87fd03b66c2a086675ca4f807d61ceb6"
	

-- /stdout --
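
The reflector lines in the kube-scheduler log above show the usual client-go pattern: each informer repeatedly tries to list its resource, backs off while the apiserver is unreachable ("connection refused"), and resumes watching once a list succeeds. As a rough illustration only — this is plain Go against the endpoint shown in the log, not client-go's actual reflector, and the backoff constants are assumptions for the sketch:

	// retrylist.go: illustrative list-then-retry-with-backoff loop,
	// mirroring the retry behavior visible in the reflector errors above.
	// Not client-go's implementation; endpoint and constants are assumed.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func listWithBackoff(url string, steps int) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster's cert is self-signed; a real client would
			// verify against the cluster CA rather than skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		delay := 500 * time.Millisecond
		for i := 0; i < steps; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // list succeeded; a reflector would start watching here
			}
			// "connect: connection refused" lands here while the apiserver is down.
			fmt.Printf("list attempt %d failed: %v; retrying in %v\n", i+1, err, delay)
			time.Sleep(delay)
			delay *= 2 // exponential backoff between attempts
		}
		return fmt.Errorf("still failing after %d attempts", steps)
	}

	func main() {
		// Endpoint taken from the reflector errors in the log above.
		_ = listWithBackoff("https://192.169.0.5:8443/api/v1/nodes?limit=500", 5)
	}
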
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000: exit status 2 (147.347434ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-744000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (97.36s)
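
For context on the status checks above: the helper shells out to `minikube status --format={{.APIServer}}` and treats a non-zero exit as informational ("status error: exit status 2 (may be ok)"), keying off the printed component state instead. A minimal sketch of that pattern — the binary path and profile name come from the log, and the error handling is simplified:

	// apistatus.go: illustrative version of the post-mortem check above:
	// run `minikube status --format={{.APIServer}}` and treat a non-zero
	// exit as "component not running" rather than a hard failure.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func apiServerState(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		state := strings.TrimSpace(string(out)) // e.g. "Running" or "Stopped"
		if _, ok := err.(*exec.ExitError); ok {
			// minikube exits non-zero when a component is down; the printed
			// state is still meaningful, so return it without an error.
			return state, nil
		}
		return state, err
	}

	func main() {
		state, err := apiServerState("ha-744000")
		fmt.Println(state, err) // the run above printed "Stopped" with exit status 2
	}
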

TestMultiControlPlane/serial/DegradedAfterClusterRestart (23.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-744000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-744000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-744000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-744000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugi
n\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fa
lse,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000: exit status 2 (13.79534425s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 logs -n 25
E0917 10:28:58.513019    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 logs -n 25: (8.858683902s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m04 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp testdata/cp-test.txt                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000 sudo cat                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m03 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-744000 node stop m02 -v=7                                                                                                 | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-744000 node start m02 -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:22 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000 -v=7                                                                                                       | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-744000 -v=7                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT | 17 Sep 24 10:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:23 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	| node    | ha-744000 node delete m03 -v=7                                                                                               | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-744000 stop -v=7                                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true                                                                                                     | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:26:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:26:58.457695    4448 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:58.457869    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.457875    4448 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:58.457878    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.458048    4448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:58.459431    4448 out.go:352] Setting JSON to false
	I0917 10:26:58.481798    4448 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3385,"bootTime":1726590633,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:26:58.481949    4448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:26:58.503960    4448 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:26:58.546841    4448 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:26:58.546875    4448 notify.go:220] Checking for updates...
	I0917 10:26:58.589550    4448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:26:58.610683    4448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:26:58.631667    4448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:26:58.652583    4448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:26:58.673667    4448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:26:58.695561    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:58.696255    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.696327    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.705884    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52142
	I0917 10:26:58.706304    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.706746    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.706764    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.707014    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.707146    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.707350    4448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:26:58.707601    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.707628    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.716185    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 10:26:58.716537    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.716881    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.716897    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.717100    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.717222    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.745596    4448 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:26:58.787571    4448 start.go:297] selected driver: hyperkit
	I0917 10:26:58.787600    4448 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.787838    4448 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:26:58.788024    4448 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.788251    4448 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:26:58.797793    4448 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:26:58.801784    4448 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.801808    4448 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:26:58.804449    4448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:26:58.804489    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:26:58.804523    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:26:58.804589    4448 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.804704    4448 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.826385    4448 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:26:58.847617    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:26:58.847686    4448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:26:58.847716    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:26:58.847928    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:26:58.847948    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:26:58.848103    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:58.849030    4448 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:26:58.849203    4448 start.go:364] duration metric: took 147.892µs to acquireMachinesLock for "ha-744000"
	I0917 10:26:58.849244    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:26:58.849261    4448 fix.go:54] fixHost starting: 
	I0917 10:26:58.849685    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.849713    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.858847    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52146
	I0917 10:26:58.859214    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.859547    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.859558    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.859809    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.859941    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.860044    4448 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:58.860131    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.860222    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:58.861252    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.861281    4448 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:26:58.861296    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:26:58.861379    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:26:58.903396    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:26:58.924477    4448 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:26:58.924739    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.924805    4448 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:26:58.926818    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.926830    4448 main.go:141] libmachine: (ha-744000) DBG | pid 4331 is in state "Stopped"
	I0917 10:26:58.926844    4448 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
	I0917 10:26:58.927183    4448 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:26:59.037116    4448 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:26:59.037141    4448 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:26:59.037239    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037264    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037302    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:26:59.037345    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:26:59.037367    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:26:59.039007    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Pid is 4462
	I0917 10:26:59.039387    4448 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:26:59.039405    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:59.039460    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4462
	I0917 10:26:59.040899    4448 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:26:59.040968    4448 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:26:59.040982    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:26:59.040991    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:26:59.041010    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:26:59.041033    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:26:59.041040    4448 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:26:59.041046    4448 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
	I0917 10:26:59.041079    4448 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:26:59.041673    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:26:59.041837    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:59.042200    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:26:59.042209    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:59.042313    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:26:59.042393    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:26:59.042497    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042594    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:26:59.042817    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:26:59.043033    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:26:59.043044    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:26:59.047101    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:26:59.098991    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:26:59.099689    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.099714    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.099723    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.099730    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.478495    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:26:59.478510    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:26:59.593167    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.593183    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.593195    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.593203    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.594075    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:26:59.594086    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:05.183473    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:05.183540    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:05.183555    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:05.208169    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:10.113996    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:10.114014    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114152    4448 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:27:10.114163    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114266    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.114402    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.114494    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114584    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.114812    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.114997    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.115005    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:27:10.189969    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:27:10.189985    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.190121    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.190233    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190324    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190425    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.190562    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.190707    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.190718    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:10.253511    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:10.253531    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:10.253549    4448 buildroot.go:174] setting up certificates
	I0917 10:27:10.253555    4448 provision.go:84] configureAuth start
	I0917 10:27:10.253563    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.253694    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:10.253790    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.253930    4448 provision.go:143] copyHostCerts
	I0917 10:27:10.253971    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254039    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:10.254046    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254180    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:10.254370    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254409    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:10.254414    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254534    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:10.254684    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254722    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:10.254727    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254807    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:10.254980    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
	I0917 10:27:10.443647    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:10.443709    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:10.443745    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.444017    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.444217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.444311    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.444408    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:10.481724    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:10.481797    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:10.501694    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:10.501755    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:27:10.521451    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:10.521514    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:27:10.541883    4448 provision.go:87] duration metric: took 288.31459ms to configureAuth
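
Editor's note: the configureAuth step above generates a Docker server certificate whose SANs cover every name the daemon may be reached by (san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]). For illustration only: the sketch below self-signs rather than signing with the cluster CA as minikube's provision step does, but it carries the same SAN set reported in the log.

// servercert.go: self-signed TLS server cert with the SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching san=[...] in the provision.go line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-744000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
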
	I0917 10:27:10.541895    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:10.542067    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:10.542085    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:10.542217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.542312    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.542387    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542467    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542559    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.542679    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.542806    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.542813    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:10.601508    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:10.601520    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:10.601615    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:10.601630    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.601764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.601865    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.601953    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.602043    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.602200    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.602343    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.602386    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:10.669944    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:10.669969    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.670102    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.670200    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670294    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670389    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.670510    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.670646    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.670658    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:12.369424    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:12.369438    4448 machine.go:96] duration metric: took 13.32714724s to provisionDockerMachine
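
Editor's note: the SSH command above uses a diff-and-swap idiom: write the unit to docker.service.new, and only if it differs from the installed file move it into place and daemon-reload/restart. A local-filesystem analogue of that idiom, as a sketch (not minikube's code):

// writeifchanged.go: replace a file, and so trigger a restart, only when
// its content actually changed.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func writeIfChanged(path string, data []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, data) {
		return false, nil // contents identical; skip the swap and the restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return false, err
	}
	// Rename is atomic on the same filesystem, so readers never observe a
	// half-written unit file.
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A caller would daemon-reload and restart the service only if true.
	fmt.Println("changed:", changed)
}
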
	I0917 10:27:12.369451    4448 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:27:12.369463    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:12.369473    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.369675    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:12.369692    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.369803    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.369884    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.369975    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.370067    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.413317    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:12.417238    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:12.417272    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:12.417380    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:12.417569    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:12.417576    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:12.417788    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:12.427707    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:12.461431    4448 start.go:296] duration metric: took 91.970306ms for postStartSetup
	I0917 10:27:12.461460    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.461662    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:12.461675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.461764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.461863    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.461951    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.462049    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.498975    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:12.499039    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:12.553785    4448 fix.go:56] duration metric: took 13.704442272s for fixHost
	I0917 10:27:12.553808    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.553948    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.554064    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554158    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554243    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.554376    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:12.554528    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:12.554535    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:12.611703    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594032.650749132
	
	I0917 10:27:12.611715    4448 fix.go:216] guest clock: 1726594032.650749132
	I0917 10:27:12.611721    4448 fix.go:229] Guest: 2024-09-17 10:27:12.650749132 -0700 PDT Remote: 2024-09-17 10:27:12.553798 -0700 PDT m=+14.131667372 (delta=96.951132ms)
	I0917 10:27:12.611739    4448 fix.go:200] guest clock delta is within tolerance: 96.951132ms
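
Editor's note: the fix.go lines above compare the guest's clock against the host's to decide whether a time resync is needed. A minimal sketch of that check, with the `date +%s.%N` parsing inferred from the log line (not minikube's actual implementation):

// clockdelta.go: parse the guest's `date +%s.%N` output and compute the skew.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	// date's %N is zero-padded to 9 digits, so the fraction is nanoseconds.
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726594032.650749132") // value from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	// Compared against a tolerance; the run above measured a ~97ms delta.
	fmt.Printf("guest clock delta: %v\n", time.Since(guest))
}
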
	I0917 10:27:12.611750    4448 start.go:83] releasing machines lock for "ha-744000", held for 13.76244446s
	I0917 10:27:12.611768    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.611894    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:12.611995    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612340    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612438    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612522    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:12.612557    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612569    4448 ssh_runner.go:195] Run: cat /version.json
	I0917 10:27:12.612585    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612694    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612758    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612775    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612845    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612893    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612945    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.612977    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.648784    4448 ssh_runner.go:195] Run: systemctl --version
	I0917 10:27:12.693591    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:27:12.698718    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:12.698762    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:12.712125    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:12.712136    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.712235    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:12.730012    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:12.739057    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:12.747889    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:12.747935    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:12.757003    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.765797    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:12.774517    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.783400    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:12.792355    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:12.801214    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:12.810043    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:12.818991    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:12.826988    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:12.835075    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:12.932332    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:12.951203    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.951306    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:12.965837    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:12.981143    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:12.997816    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:13.008834    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.019726    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:13.047621    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.057914    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:13.072731    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:13.075778    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:13.083057    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:13.096420    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:13.190446    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:13.291417    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:13.291479    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:13.305208    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:13.405566    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:27:15.763788    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.358187677s)
	I0917 10:27:15.763854    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:27:15.774266    4448 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:27:15.786987    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:15.797461    4448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:27:15.892958    4448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:27:15.992563    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.099704    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:27:16.113167    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:16.123851    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.230595    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:27:16.294806    4448 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:27:16.294898    4448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:27:16.300863    4448 start.go:563] Will wait 60s for crictl version
	I0917 10:27:16.300922    4448 ssh_runner.go:195] Run: which crictl
	I0917 10:27:16.304010    4448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:27:16.329606    4448 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:27:16.329710    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.346052    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.386748    4448 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:27:16.386784    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:16.387136    4448 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:27:16.390752    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.401571    4448 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:27:16.401664    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:16.401736    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.415872    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.415884    4448 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:27:16.415970    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.427730    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.427747    4448 cache_images.go:84] Images are preloaded, skipping loading
	I0917 10:27:16.427754    4448 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:27:16.427829    4448 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:27:16.427915    4448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:27:16.463597    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:27:16.463611    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:27:16.463624    4448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:27:16.463640    4448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:27:16.463730    4448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 10:27:16.463744    4448 kube-vip.go:115] generating kube-vip config ...
	I0917 10:27:16.463801    4448 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:27:16.478021    4448 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:27:16.478094    4448 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 10:27:16.478153    4448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:27:16.486558    4448 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:27:16.486616    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:27:16.494493    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:27:16.507997    4448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:27:16.521295    4448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:27:16.535199    4448 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:27:16.548668    4448 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:27:16.551530    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.561441    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.669349    4448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:27:16.684528    4448 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:27:16.684541    4448 certs.go:194] generating shared ca certs ...
	I0917 10:27:16.684551    4448 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.684731    4448 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:27:16.684804    4448 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:27:16.684814    4448 certs.go:256] generating profile certs ...
	I0917 10:27:16.684905    4448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:27:16.684929    4448 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437
	I0917 10:27:16.684945    4448 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0917 10:27:16.754039    4448 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 ...
	I0917 10:27:16.754056    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437: {Name:mk79438fdb4dc3d525e8f682359147c957173c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754456    4448 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 ...
	I0917 10:27:16.754466    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437: {Name:mk6d911cd96357b3c3159c4d3a41f23afb7d4c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754680    4448 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:27:16.754895    4448 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:27:16.755149    4448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:27:16.755158    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:27:16.755205    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:27:16.755227    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:27:16.755246    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:27:16.755264    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:27:16.755283    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:27:16.755301    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:27:16.755318    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:27:16.755412    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:27:16.755459    4448 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:27:16.755467    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:27:16.755497    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:27:16.755530    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:27:16.755558    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:27:16.755623    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:16.755655    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:16.755675    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:27:16.755693    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:27:16.756123    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:27:16.777874    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:27:16.799280    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:27:16.827224    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:27:16.853838    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:27:16.907328    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:27:16.953101    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:27:16.997682    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:27:17.038330    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:27:17.061602    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:27:17.092949    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:27:17.123494    4448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:27:17.140334    4448 ssh_runner.go:195] Run: openssl version
	I0917 10:27:17.145978    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:27:17.156986    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161699    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161756    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.170341    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:27:17.187142    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:27:17.201375    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204789    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204832    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.208961    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:27:17.218128    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:27:17.227213    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230513    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230553    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.234703    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
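
Editor's note: each hash-and-symlink pair above implements OpenSSL's CA lookup convention: /etc/ssl/certs/<subject-hash>.0 must point at the trusted certificate so verification can find it. A sketch of those two steps in Go, shelling out to openssl for the hash just as the log does (paths are the ones reported above):

// certhashlink.go: create the <subject-hash>.0 symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}
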
	I0917 10:27:17.243926    4448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:27:17.247354    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:27:17.251674    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:27:17.256090    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:27:17.260499    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:27:17.264702    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:27:17.268923    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
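
Editor's note: the `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours. For illustration, here is the same test in Go; this is a sketch, since minikube itself shells out to openssl as the log shows.

// checkend.go: report whether a PEM certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+window, matching -checkend semantics.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
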
	I0917 10:27:17.273119    4448 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:27:17.273252    4448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:27:17.284758    4448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:27:17.293284    4448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:27:17.293296    4448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:27:17.293343    4448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:27:17.301434    4448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:27:17.301756    4448 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.301839    4448 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:27:17.302016    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.302656    4448 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.302866    4448 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x4ad2720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:27:17.303186    4448 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:27:17.303370    4448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:27:17.311395    4448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:27:17.311410    4448 kubeadm.go:597] duration metric: took 18.109722ms to restartPrimaryControlPlane
	I0917 10:27:17.311416    4448 kubeadm.go:394] duration metric: took 38.30313ms to StartCluster
	I0917 10:27:17.311425    4448 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.311502    4448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.311847    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.312074    4448 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:27:17.312086    4448 start.go:241] waiting for startup goroutines ...
	I0917 10:27:17.312098    4448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:27:17.312209    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.356558    4448 out.go:177] * Enabled addons: 
	I0917 10:27:17.377453    4448 addons.go:510] duration metric: took 65.359314ms for enable addons: enabled=[]
	I0917 10:27:17.377491    4448 start.go:246] waiting for cluster config update ...
	I0917 10:27:17.377508    4448 start.go:255] writing updated cluster config ...
	I0917 10:27:17.399517    4448 out.go:201] 
	I0917 10:27:17.421006    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.421153    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.443394    4448 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:27:17.485722    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:17.485786    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:27:17.485968    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:27:17.485986    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:27:17.486112    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.487099    4448 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:27:17.487205    4448 start.go:364] duration metric: took 81.172µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:27:17.487235    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:27:17.487243    4448 fix.go:54] fixHost starting: m02
	I0917 10:27:17.487683    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:27:17.487720    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:27:17.497503    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52168
	I0917 10:27:17.498037    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:27:17.498462    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:27:17.498477    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:27:17.498776    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:27:17.499011    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.499112    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:27:17.499198    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.499265    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:27:17.500274    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.500290    4448 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:27:17.500304    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:27:17.500387    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:27:17.542418    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:27:17.563504    4448 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:27:17.563707    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.563730    4448 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:27:17.564875    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.564887    4448 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4339 is in state "Stopped"
	I0917 10:27:17.564903    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
	I0917 10:27:17.565097    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:27:17.591233    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:27:17.591269    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:27:17.591443    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591484    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591541    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:27:17.591573    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:27:17.591591    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:27:17.592872    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Pid is 4469
	I0917 10:27:17.593367    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:27:17.593378    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.593408    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4469
	I0917 10:27:17.595062    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:27:17.595127    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:27:17.595146    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0d6c}
	I0917 10:27:17.595182    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:27:17.595200    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:27:17.595210    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:27:17.595213    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:27:17.595230    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:27:17.595241    4448 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
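
To map the generated MAC 72:92:6:7e:7d:92 back to an IP, the driver scans /var/db/dhcpd_leases and matches on the hardware address, resolving to 192.169.0.6 above. A sketch of that lookup; the stanza layout (ip_address=/hw_address= lines between braces) is an assumption about the macOS vmnet dhcpd file, since the log only shows the already-parsed entries:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const wantMAC = "72:92:6:7e:7d:92" // generated MAC from the log
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var ip, hw string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// assumed value shape "1,72:92:6:7e:7d:92": drop the type prefix
    			if i := strings.Index(line, ","); i >= 0 {
    				hw = line[i+1:]
    			}
    		case line == "}": // closes one lease stanza
    			if hw == wantMAC {
    				fmt.Println("IP:", ip)
    				return
    			}
    			ip, hw = "", ""
    		}
    	}
    	fmt.Fprintln(os.Stderr, "no lease found for", wantMAC)
    	os.Exit(1)
    }
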
	I0917 10:27:17.595879    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:17.596065    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.596597    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:27:17.596609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.596723    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:17.596804    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:17.596890    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597096    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:17.597227    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:17.597374    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:17.597383    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:27:17.600658    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:27:17.609248    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:27:17.610115    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:17.610129    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:17.610159    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:17.610179    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:17.995972    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:27:17.995987    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:27:18.110623    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:18.110642    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:18.110651    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:18.110657    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:18.111459    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:27:18.111468    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:23.703289    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:23.703415    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:23.703428    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:23.727083    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:28.668165    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:28.668207    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668348    4448 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:27:28.668359    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668445    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.668533    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.668618    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.668945    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.669097    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.669106    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:27:28.749259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:27:28.749274    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.749405    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.749513    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749700    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.749847    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.749994    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.750009    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:28.821499    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:28.821514    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:28.821523    4448 buildroot.go:174] setting up certificates
	I0917 10:27:28.821528    4448 provision.go:84] configureAuth start
	I0917 10:27:28.821534    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.821669    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:28.821789    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.821885    4448 provision.go:143] copyHostCerts
	I0917 10:27:28.821910    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.821968    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:28.821973    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.822114    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:28.822315    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822354    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:28.822366    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822450    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:28.822596    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822635    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:28.822639    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822717    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:28.822857    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
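
The provisioning step above signs a fresh server certificate with the cluster CA, embedding the SAN list printed in the log (127.0.0.1, 192.169.0.6, ha-744000-m02, localhost, minikube). A self-contained Go sketch of that signing step; it assumes an RSA PKCS#1 CA key, and the file names and validity period are illustrative rather than taken from minikube:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	caPEM, err := os.ReadFile("ca.pem") // CA pair named as in the log
    	must(err)
    	keyPEM, err := os.ReadFile("ca-key.pem")
    	must(err)
    	caBlock, _ := pem.Decode(caPEM)
    	keyBlock, _ := pem.Decode(keyPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
    	must(err)

    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000-m02"}}, // org= from the log
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube] from the log:
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    		DNSNames:    []string{"ha-744000-m02", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	must(err)
    	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
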
	I0917 10:27:28.955024    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:28.955079    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:28.955094    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.955239    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.955341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.955430    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.955526    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:28.994909    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:28.994978    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:27:29.014096    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:29.014170    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:27:29.033197    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:29.033261    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:29.052129    4448 provision.go:87] duration metric: took 230.592645ms to configureAuth
	I0917 10:27:29.052147    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:29.052322    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:29.052336    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:29.052473    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.052573    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.052670    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052755    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052827    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.052942    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.053069    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.053076    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:29.116259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:29.116272    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:29.116365    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:29.116377    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.116506    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.116595    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116715    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116793    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.116936    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.117075    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.117118    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:29.192146    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:29.192170    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.192303    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.192391    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192497    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192577    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.192705    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.192844    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.192856    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:30.870717    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:30.870732    4448 machine.go:96] duration metric: took 13.274043119s to provisionDockerMachine
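
The diff-or-replace one-liner above (run at 10:27:29.192) makes the unit update idempotent: diff exits non-zero when docker.service and docker.service.new differ, or when the old unit does not exist yet, as the "can't stat" output shows, and only then is the new file moved into place and the service reloaded, enabled, and restarted. A small Go sketch of the same update-if-changed pattern, executed locally here in place of the log's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// diff fails (non-zero exit) when the files differ or the old unit is
    	// missing; only then does the replacement branch run.
    	script := `diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker
    }`
    	out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("unit update failed:", err)
    	}
    }
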
	I0917 10:27:30.870747    4448 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:27:30.870755    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:30.870766    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.870980    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:30.870994    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.871125    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.871248    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.871341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.871432    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.914708    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:30.918099    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:30.918113    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:30.918212    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:30.918387    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:30.918394    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:30.918605    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:30.929083    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:30.958117    4448 start.go:296] duration metric: took 87.359751ms for postStartSetup
	I0917 10:27:30.958138    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.958316    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:30.958328    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.958426    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.958518    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.958597    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.958669    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.998754    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:30.998827    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:31.054686    4448 fix.go:56] duration metric: took 13.567353836s for fixHost
	I0917 10:27:31.054713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.054850    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.054939    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055014    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055085    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.055233    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:31.055380    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:31.055386    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:31.119216    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594051.159133703
	
	I0917 10:27:31.119227    4448 fix.go:216] guest clock: 1726594051.159133703
	I0917 10:27:31.119235    4448 fix.go:229] Guest: 2024-09-17 10:27:31.159133703 -0700 PDT Remote: 2024-09-17 10:27:31.054702 -0700 PDT m=+32.632454337 (delta=104.431703ms)
	I0917 10:27:31.119246    4448 fix.go:200] guest clock delta is within tolerance: 104.431703ms
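
The clock check above runs date +%s.%N in the guest and compares the result with the host clock; the 104.431703ms delta is accepted. A sketch of that comparison using the two timestamps from the log (the 1-second threshold below is an assumption, since the log does not print the exact tolerance):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Guest `date +%s.%N` output and the host-side timestamp, both from the log.
    	const guestOut = "1726594051.159133703"
    	const hostUnix = 1726594051.054702

    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	host := time.Unix(0, int64(hostUnix*float64(time.Second)))

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta < time.Second { // assumed tolerance
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock skew too large: %v\n", delta)
    	}
    }
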
	I0917 10:27:31.119250    4448 start.go:83] releasing machines lock for "ha-744000-m02", held for 13.631947572s
	I0917 10:27:31.119267    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.119393    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:31.143966    4448 out.go:177] * Found network options:
	I0917 10:27:31.164924    4448 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:27:31.185989    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.186029    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.186884    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187158    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187319    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:31.187368    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:27:31.187382    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.187491    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:27:31.187550    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.187616    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187796    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.187813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187986    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.188154    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188197    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:31.188284    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:27:31.224656    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:31.224727    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:31.272646    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:31.272663    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.272743    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.288486    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:31.297401    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:31.306736    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.306808    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:31.316018    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.325058    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:31.334512    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.343837    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:31.353242    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:31.362032    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:31.371387    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:31.380261    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:31.388512    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:31.396778    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.496690    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:31.515568    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.515642    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:31.540737    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.552945    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:31.572641    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.584129    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.595235    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:31.619571    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
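
Before settling on docker, the sequence above probes the competing runtimes and stops any that are active: with --quiet, is-active signals only through its exit status, so a zero exit means "running". A sketch of that probe-and-stop loop, mirroring the invocations in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // isActive mirrors `sudo systemctl is-active --quiet service <unit>`
    // from the log: a zero exit status means the unit is active.
    func isActive(unit string) bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil
    }

    func main() {
    	for _, unit := range []string{"containerd", "crio"} {
    		if isActive(unit) {
    			fmt.Println("stopping", unit)
    			// `systemctl stop -f <unit>`, as in the log
    			_ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
    		}
    	}
    }
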
	I0917 10:27:31.631020    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.646195    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:31.649235    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:31.657206    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:31.670819    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:31.769091    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:31.876805    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.876827    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:31.890932    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.985803    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:28:33.019399    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.033193508s)
	I0917 10:28:33.019489    4448 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:28:33.055431    4448 out.go:201] 
	W0917 10:28:33.077249    4448 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:27:29 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.538749787Z" level=info msg="Starting up"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.539378325Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.541084999Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.558457504Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573199339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573220908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573258162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573299725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573411020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573446242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573553666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573587921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573599847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573607195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573685739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573880273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575404717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575443775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575555494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575590640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575719071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575763589Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.577951289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578038703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578076919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578089302Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578157091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578202689Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580641100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580726566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580738845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580747690Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580756580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580765114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580772643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580781164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580790542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580798635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580806480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580814346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580832655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580847752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580858242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580866931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580879634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580898230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580914939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580923943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580931177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580940500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580948337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580963023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580980668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580989498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580996636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581056206Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581091289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581104079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581113194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581120030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581133102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581145706Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581334956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581407817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581460834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581473448Z" level=info msg="containerd successfully booted in 0.023887s"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.569483774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.598149093Z" level=info msg="Loading containers: start."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.772640000Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.832682998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.874141710Z" level=info msg="Loading containers: done."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885048604Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885231945Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907500544Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907671752Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:27:30 ha-744000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.038076014Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039237554Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:27:32 ha-744000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039672384Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039926596Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039966362Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:33 ha-744000-m02 dockerd[1165]: time="2024-09-17T17:27:33.083664420Z" level=info msg="Starting up"
	Sep 17 17:28:33 ha-744000-m02 dockerd[1165]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 10:28:33.077325    4448 out.go:270] * 
	W0917 10:28:33.078575    4448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:28:33.141292    4448 out.go:201] 
	
	
	==> Docker <==
	Sep 17 17:27:45 ha-744000 dockerd[1178]: time="2024-09-17T17:27:45.693599135Z" level=info msg="ignoring event" container=f8ad30db3b448056ed93e2d805c2b8b365fc8dbe578b4b515549ac815f60dabc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.363808714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.363881678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.363895120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.364009200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982686773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982795889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982809691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982891719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908438866Z" level=info msg="shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908495753Z" level=warning msg="cleaning up after shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908504694Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1178]: time="2024-09-17T17:28:15.909053440Z" level=info msg="ignoring event" container=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.924890203Z" level=info msg="shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925281000Z" level=warning msg="cleaning up after shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925315687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1178]: time="2024-09-17T17:28:26.926104549Z" level=info msg="ignoring event" container=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981215245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981300627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981313170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981748827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988154215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988302802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988330908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988447275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3757e12da538a       175ffd71cce3d       4 seconds ago        Running             kube-controller-manager   5                   ac5039c087055       kube-controller-manager-ha-744000
	b526083efb4fc       6bab7719df100       15 seconds ago       Running             kube-apiserver            4                   049299c96bb2c       kube-apiserver-ha-744000
	6b1d67e1da594       175ffd71cce3d       46 seconds ago       Exited              kube-controller-manager   4                   ac5039c087055       kube-controller-manager-ha-744000
	66235de21ec80       6bab7719df100       55 seconds ago       Exited              kube-apiserver            3                   049299c96bb2c       kube-apiserver-ha-744000
	bbf0d2ebe5c6c       9aa1fad941575       About a minute ago   Running             kube-scheduler            2                   339a7c29b977e       kube-scheduler-ha-744000
	1e359ca4a791e       2e96e5913fc06       About a minute ago   Running             etcd                      2                   bf723b1d8bf7c       etcd-ha-744000
	6df162190be2a       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   026314418eb78       kube-vip-ha-744000
	1b95d7a1c7708       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       2                   375cde06a4bcf       storage-provisioner
	079da006755a7       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   f0eee6e67fe42       busybox-7dff88458-cn52t
	9f76145e8eaf7       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   8b4b5191649e7       kindnet-c59lr
	6a4aba3acb1e9       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   3888ce04e78db       coredns-7c65d6cfc9-khnlh
	fb8b83fe49a6e       60c005f310ff3       4 minutes ago        Exited              kube-proxy                1                   f1782d63db94f       kube-proxy-6xd2h
	24cfd031ec879       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   244f5bc456efc       coredns-7c65d6cfc9-j9jcc
	cfbfd57cf2b56       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   433c480eea542       kube-vip-ha-744000
	a7645ef2ae8dd       9aa1fad941575       5 minutes ago        Exited              kube-scheduler            1                   fbf79ae31cbab       kube-scheduler-ha-744000
	23a7e0d95a77c       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   55cb3d05ddf34       etcd-ha-744000
	
	
	==> coredns [24cfd031ec87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52682 - 33898 "HINFO IN 2709939145458862568.721558315158165230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009931439s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[318103159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.683) (total time: 30003ms):
	Trace[318103159]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:24:50.686)
	Trace[318103159]: [30.003131559s] [30.003131559s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1979128092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1979128092]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1979128092]: [30.000652416s] [30.000652416s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1978210991]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1978210991]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1978210991]: [30.000766886s] [30.000766886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a4aba3acb1e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60360 - 19575 "HINFO IN 3607648931521447410.3411894034218696920. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009401347s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960564509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1960564509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.746)
	Trace[1960564509]: [30.00213331s] [30.00213331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197674287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1197674287]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[1197674287]: [30.002759704s] [30.002759704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[633118280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30003ms):
	Trace[633118280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[633118280]: [30.003193097s] [30.003193097s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 17:28:57.379243    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:59444->127.0.0.1:8443: read: connection reset by peer"
	E0917 17:28:57.381033    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:57.382480    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:57.384087    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:28:57.385814    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035209] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007985] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[Sep17 17:27] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006963] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.845078] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.235754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000048] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.478686] systemd-fstab-generator[466]: Ignoring "noauto" option for root device
	[  +0.092656] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +2.006519] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.259762] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.049883] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051714] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.112681] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +2.485271] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	[  +0.103516] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.100618] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.134329] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.431436] systemd-fstab-generator[1594]: Ignoring "noauto" option for root device
	[  +6.580361] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.488197] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [1e359ca4a791] <==
	{"level":"warn","ts":"2024-09-17T17:28:53.050929Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:53.551985Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:54.054734Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:54.096107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:54.096182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:54.096215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:54.096232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:54.262007Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:54.262294Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:54.563135Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:55.064887Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:55.185700Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-744000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-17T17:28:55.566478Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:55.595674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:55.595755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:55.595778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:55.595795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:56.067995Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:56.573863Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:57.076627Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:57.095677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:57.095768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:57.095787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:57.095804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:57.577216Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	
	
	==> etcd [23a7e0d95a77] <==
	{"level":"warn","ts":"2024-09-17T17:26:50.587150Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.962871734s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.169.0.5\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587161Z","caller":"traceutil/trace.go:171","msg":"trace[618307594] range","detail":"{range_begin:/registry/masterleases/192.169.0.5; range_end:; }","duration":"6.962884303s","start":"2024-09-17T17:26:43.624274Z","end":"2024-09-17T17:26:50.587158Z","steps":["trace[618307594] 'agreement among raft nodes before linearized reading'  (duration: 6.96287178s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:43.624238Z","time spent":"6.962930406s","remote":"127.0.0.1:50532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/masterleases/192.169.0.5\" "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.551739854s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587269Z","caller":"traceutil/trace.go:171","msg":"trace[474401785] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.551753744s","start":"2024-09-17T17:26:49.035511Z","end":"2024-09-17T17:26:50.587265Z","steps":["trace[474401785] 'agreement among raft nodes before linearized reading'  (duration: 1.551739815s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:49.035495Z","time spent":"1.551781157s","remote":"127.0.0.1:50648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.571949422s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587333Z","caller":"traceutil/trace.go:171","msg":"trace[779412434] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"3.571960909s","start":"2024-09-17T17:26:47.015370Z","end":"2024-09-17T17:26:50.587331Z","steps":["trace[779412434] 'agreement among raft nodes before linearized reading'  (duration: 3.571949266s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:47.015364Z","time spent":"3.571976754s","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:45.985835Z","time spent":"4.601799065s","remote":"127.0.0.1:50768","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-17T17:26:50.686768Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:26:50.686883Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686894Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686906Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686956Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686981Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687003Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687012Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.698284Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698473Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-744000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:28:57 up 1 min,  0 users,  load average: 0.14, 0.10, 0.04
	Linux ha-744000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f76145e8eaf] <==
	I0917 17:26:11.511367       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:11.512152       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:11.512248       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:11.512772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:11.512871       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504250       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:21.504302       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504625       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:21.504682       1 main.go:299] handling current node
	I0917 17:26:21.504706       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:21.504715       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:21.504816       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:21.504869       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:31.506309       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:31.506431       1 main.go:299] handling current node
	I0917 17:26:31.506449       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:31.506462       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:31.506621       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:31.506656       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.505932       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:41.506052       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:41.506553       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:41.506833       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.507226       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:41.507357       1 main.go:299] handling current node
	
	
	==> kube-apiserver [66235de21ec8] <==
	command /bin/bash -c "docker logs --tail 25 66235de21ec8" failed with error: /bin/bash -c "docker logs --tail 25 66235de21ec8": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: 66235de21ec8
	
	
	==> kube-apiserver [b526083efb4f] <==
	I0917 17:28:36.086684       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 17:28:36.088246       1 server.go:142] Version: v1.31.1
	I0917 17:28:36.088285       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:36.354102       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 17:28:36.357696       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:28:36.370175       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 17:28:36.370322       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 17:28:36.370574       1 instance.go:232] Using reconciler: lease
	W0917 17:28:56.356155       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:56.356428       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:56.372981       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 17:28:56.373006       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3757e12da538] <==
	I0917 17:28:47.524594       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:47.708212       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:47.708245       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:47.709288       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:47.709434       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:47.709442       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:47.709457       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [6b1d67e1da59] <==
	I0917 17:28:05.497749       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:06.034875       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:06.034965       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:06.036148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:06.036157       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:06.036166       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:06.036173       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 17:28:26.901132       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [fb8b83fe49a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:24:21.123827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:24:21.146583       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:24:21.146876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:24:21.179243       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:24:21.179464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:24:21.179596       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:24:21.183190       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:24:21.184459       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:24:21.184543       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:24:21.188244       1 config.go:199] "Starting service config controller"
	I0917 17:24:21.188350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:24:21.188588       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:24:21.188659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:24:21.192108       1 config.go:328] "Starting node config controller"
	I0917 17:24:21.192216       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:24:21.289888       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:24:21.289903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:24:21.293411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7645ef2ae8d] <==
	E0917 17:23:52.361916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.361961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:23:52.361995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:23:52.362165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:23:52.362240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:23:52.362314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:23:52.362490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:23:52.362567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:23:52.362640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:23:52.362799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 17:23:53.372962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:26:50.603688       1 run.go:72] "command failed" err="finished without leader elect"
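
The "forbidden" list/watch failures above are characteristic of a kube-scheduler whose informers start before the apiserver has its RBAC grants fully established; they normally clear once authorization catches up (the cache-sync message that follows). The terminal "finished without leader elect" error means the scheduler exited because its leader-election lease could not be completed or renewed, which in this run coincides with the apiserver becoming unreachable. The same authorization question can be asked from outside the scheduler with a SubjectAccessReview; a minimal client-go sketch, assuming a reachable cluster (the kubeconfig path is illustrative, not from this run):

    package main

    import (
    	"context"
    	"fmt"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative path; point at the kubeconfig of the cluster under test.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Ask the same question the scheduler's informer failed on:
    	// may system:kube-scheduler list pods at the cluster scope?
    	sar := &authv1.SubjectAccessReview{
    		Spec: authv1.SubjectAccessReviewSpec{
    			User: "system:kube-scheduler",
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Verb:     "list",
    				Resource: "pods",
    			},
    		},
    	}
    	res, err := client.AuthorizationV1().SubjectAccessReviews().
    		Create(context.Background(), sar, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }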
	
	
	==> kube-scheduler [bbf0d2ebe5c6] <==
	E0917 17:28:27.659043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:27.780949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:27.780999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:27.954747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:27.954795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:29.812244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:29.812295       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:31.899209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:31.899308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:32.373782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:32.373902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:35.010233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:35.010333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:46.379121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:46.379226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:47.366426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:47.366523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:48.382767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:48.383125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:49.591786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:49.592123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:49.647843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:49.648127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:50.257456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:50.257486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
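
By this point the failure mode has changed: every list call now fails with "connection refused" and then "net/http: TLS handshake timeout" against 192.169.0.5:8443, i.e. the apiserver on this node is down or mid-restart rather than refusing authorization. A quick reachability probe for that endpoint (the address is the one from this run; InsecureSkipVerify is tolerable only because this checks reachability, not identity):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	dialer := net.Dialer{Timeout: 3 * time.Second}
    	conn, err := tls.DialWithDialer(&dialer, "tcp", "192.169.0.5:8443",
    		&tls.Config{InsecureSkipVerify: true}) // probe only; do not reuse for real traffic
    	if err != nil {
    		// Distinguishes "connect: connection refused" (no listener) from
    		// a handshake timeout (listener up but not completing TLS in time).
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("TCP connect and TLS handshake succeeded")
    }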
	
	
	==> kubelet <==
	Sep 17 17:28:33 ha-744000 kubelet[1601]: E0917 17:28:33.734746    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-744000_kube-system(87fd03b66c2a086675ca4f807d61ceb6)\"" pod="kube-system/kube-controller-manager-ha-744000" podUID="87fd03b66c2a086675ca4f807d61ceb6"
	Sep 17 17:28:35 ha-744000 kubelet[1601]: I0917 17:28:35.943283    1601 scope.go:117] "RemoveContainer" containerID="66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937"
	Sep 17 17:28:36 ha-744000 kubelet[1601]: W0917 17:28:36.729928    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:36 ha-744000 kubelet[1601]: E0917 17:28:36.730050    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:36 ha-744000 kubelet[1601]: E0917 17:28:36.729964    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	Sep 17 17:28:36 ha-744000 kubelet[1601]: E0917 17:28:36.944498    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:40 ha-744000 kubelet[1601]: I0917 17:28:40.664516    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:42 ha-744000 kubelet[1601]: E0917 17:28:42.873716    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:42 ha-744000 kubelet[1601]: E0917 17:28:42.873771    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:45 ha-744000 kubelet[1601]: W0917 17:28:45.945277    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:45 ha-744000 kubelet[1601]: E0917 17:28:45.945790    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:46 ha-744000 kubelet[1601]: I0917 17:28:46.944782    1601 scope.go:117] "RemoveContainer" containerID="6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721"
	Sep 17 17:28:46 ha-744000 kubelet[1601]: E0917 17:28:46.945144    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:49 ha-744000 kubelet[1601]: E0917 17:28:49.017860    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	Sep 17 17:28:49 ha-744000 kubelet[1601]: I0917 17:28:49.875548    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:52 ha-744000 kubelet[1601]: E0917 17:28:52.089297    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:52 ha-744000 kubelet[1601]: E0917 17:28:52.090072    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:56 ha-744000 kubelet[1601]: E0917 17:28:56.945624    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: I0917 17:28:57.244872    1601 scope.go:117] "RemoveContainer" containerID="66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: I0917 17:28:57.245459    1601 scope.go:117] "RemoveContainer" containerID="b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: E0917 17:28:57.245556    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:28:58 ha-744000 kubelet[1601]: W0917 17:28:58.233904    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:58 ha-744000 kubelet[1601]: E0917 17:28:58.234030    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:58 ha-744000 kubelet[1601]: W0917 17:28:58.233904    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-744000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:58 ha-744000 kubelet[1601]: E0917 17:28:58.234177    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-744000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
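
The kubelet, meanwhile, cannot reach the HA virtual IP at all (control-plane.minikube.internal resolves to 192.169.0.254 here): "no route to host", so node registration and lease renewal keep failing. The CrashLoopBackOff entries ("back-off 20s" for kube-controller-manager, "back-off 40s" for kube-apiserver) are consecutive steps of kubelet's exponential restart back-off; a toy rendering of that schedule, assuming the default 10s base and 5m cap (both are implementation details that may vary by version):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	backoff, maxWait := 10*time.Second, 5*time.Minute
    	for i := 1; i <= 7; i++ {
    		fmt.Printf("restart %d: wait %v\n", i, backoff) // 10s, 20s, 40s, 80s, ...
    		backoff *= 2
    		if backoff > maxWait {
    			backoff = maxWait
    		}
    	}
    }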
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000: exit status 2 (149.162575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-744000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (23.10s)
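
The status probes in this report render minikube's state through a Go template ({{.APIServer}} above, {{.Host}} below). A minimal sketch of that rendering with text/template; the Status struct here is hypothetical, merely mirroring the field names the report's templates reference:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical struct mirroring the field names used by the report's templates.
    type Status struct {
    	Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured"}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    	// Prints "Stopped", matching the probe output above.
    }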

TestMultiControlPlane/serial/AddSecondaryNode (2.79s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-744000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-744000 --control-plane -v=7 --alsologtostderr: exit status 103 (243.313121ms)

-- stdout --
	* The control-plane node ha-744000-m02 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-744000"

-- /stdout --
** stderr ** 
	I0917 10:28:58.920583    4516 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:28:58.920863    4516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:28:58.920869    4516 out.go:358] Setting ErrFile to fd 2...
	I0917 10:28:58.920873    4516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:28:58.921062    4516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:28:58.921434    4516 mustload.go:65] Loading cluster: ha-744000
	I0917 10:28:58.921778    4516 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:28:58.922130    4516 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:28:58.922179    4516 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:28:58.930633    4516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52254
	I0917 10:28:58.931032    4516 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:28:58.931412    4516 main.go:141] libmachine: Using API Version  1
	I0917 10:28:58.931440    4516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:28:58.931700    4516 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:28:58.931831    4516 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:28:58.931927    4516 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:28:58.931982    4516 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4462
	I0917 10:28:58.933041    4516 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:28:58.933311    4516 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:28:58.933334    4516 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:28:58.941635    4516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52256
	I0917 10:28:58.941979    4516 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:28:58.942398    4516 main.go:141] libmachine: Using API Version  1
	I0917 10:28:58.942421    4516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:28:58.942630    4516 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:28:58.942741    4516 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:28:58.943092    4516 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:28:58.943114    4516 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:28:58.951483    4516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52258
	I0917 10:28:58.951801    4516 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:28:58.952119    4516 main.go:141] libmachine: Using API Version  1
	I0917 10:28:58.952130    4516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:28:58.952321    4516 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:28:58.952428    4516 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:28:58.952512    4516 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:28:58.952586    4516 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4469
	I0917 10:28:58.953621    4516 host.go:66] Checking if "ha-744000-m02" exists ...
	I0917 10:28:58.953869    4516 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:28:58.953894    4516 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:28:58.962190    4516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52260
	I0917 10:28:58.962565    4516 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:28:58.962885    4516 main.go:141] libmachine: Using API Version  1
	I0917 10:28:58.962897    4516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:28:58.963141    4516 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:28:58.963261    4516 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:28:58.963370    4516 api_server.go:166] Checking apiserver status ...
	I0917 10:28:58.963437    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:28:58.963457    4516 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:28:58.963587    4516 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:28:58.963687    4516 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:28:58.963776    4516 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:28:58.963872    4516 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	W0917 10:28:59.002501    4516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W0917 10:28:59.002679    4516 out.go:270] ! The control-plane node ha-744000 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-744000 apiserver is not running (will try others): (state=Stopped)
	I0917 10:28:59.002687    4516 api_server.go:166] Checking apiserver status ...
	I0917 10:28:59.002746    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:28:59.002764    4516 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:28:59.002889    4516 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:28:59.002970    4516 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:28:59.003073    4516 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:28:59.003151    4516 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:28:59.044286    4516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:28:59.065711    4516 out.go:177] * The control-plane node ha-744000-m02 apiserver is not running: (state=Stopped)
	I0917 10:28:59.086381    4516 out.go:177]   To start a cluster, run: "minikube start -p ha-744000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-744000 --control-plane -v=7 --alsologtostderr" : exit status 103
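
The exit status 103 follows from the probe visible in the stderr: before adding a node, minikube checks each known control-plane node for a live apiserver by running pgrep over SSH (api_server.go:166), and both probes returned empty. A rough stand-in for that probe, shelling out through `minikube ssh` rather than a raw SSH client (binary path and profile name are the ones from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The same process check the log shows minikube running on each node.
    	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-744000",
    		"ssh", "--", "sudo pgrep -xnf kube-apiserver.*minikube.*")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		// pgrep exits 1 when nothing matches: the "unable to get
    		// apiserver pid" case in the stderr above.
    		fmt.Printf("apiserver not running: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("apiserver pid(s):\n%s", out)
    }
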
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000: exit status 2 (148.68396ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 logs -n 25: (2.196042686s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m04 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp testdata/cp-test.txt                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000 sudo cat                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m03 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-744000 node stop m02 -v=7                                                                                                 | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-744000 node start m02 -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:22 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000 -v=7                                                                                                       | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-744000 -v=7                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT | 17 Sep 24 10:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:23 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	| node    | ha-744000 node delete m03 -v=7                                                                                               | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-744000 stop -v=7                                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true                                                                                                     | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-744000                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:28 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:26:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
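
Every I/W/E/F line below follows that klog format, which makes these logs easy to slice mechanically. A small parser sketch for it (not part of the test suite, just a tool for reading dumps like this one):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // One capture group per field of the format documented above:
    // severity, mmdd, wall-clock time, thread id, file:line, message.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
    	m := klogLine.FindStringSubmatch("I0917 10:26:58.457695    4448 out.go:345] Setting OutFile to fd 1 ...")
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s mmdd=%s time=%s tid=%s src=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }
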
	I0917 10:26:58.457695    4448 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:58.457869    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.457875    4448 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:58.457878    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.458048    4448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:58.459431    4448 out.go:352] Setting JSON to false
	I0917 10:26:58.481798    4448 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3385,"bootTime":1726590633,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:26:58.481949    4448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:26:58.503960    4448 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:26:58.546841    4448 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:26:58.546875    4448 notify.go:220] Checking for updates...
	I0917 10:26:58.589550    4448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:26:58.610683    4448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:26:58.631667    4448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:26:58.652583    4448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:26:58.673667    4448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:26:58.695561    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:58.696255    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.696327    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.705884    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52142
	I0917 10:26:58.706304    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.706746    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.706764    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.707014    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.707146    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.707350    4448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:26:58.707601    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.707628    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.716185    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 10:26:58.716537    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.716881    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.716897    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.717100    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.717222    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.745596    4448 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:26:58.787571    4448 start.go:297] selected driver: hyperkit
	I0917 10:26:58.787600    4448 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.787838    4448 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:26:58.788024    4448 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.788251    4448 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:26:58.797793    4448 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:26:58.801784    4448 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.801808    4448 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:26:58.804449    4448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:26:58.804489    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:26:58.804523    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:26:58.804589    4448 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.804704    4448 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.826385    4448 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:26:58.847617    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:26:58.847686    4448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:26:58.847716    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:26:58.847928    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:26:58.847948    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:26:58.848103    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:58.849030    4448 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:26:58.849203    4448 start.go:364] duration metric: took 147.892µs to acquireMachinesLock for "ha-744000"
	I0917 10:26:58.849244    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:26:58.849261    4448 fix.go:54] fixHost starting: 
	I0917 10:26:58.849685    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.849713    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.858847    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52146
	I0917 10:26:58.859214    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.859547    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.859558    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.859809    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.859941    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.860044    4448 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:58.860131    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.860222    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:58.861252    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.861281    4448 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:26:58.861296    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:26:58.861379    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:26:58.903396    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:26:58.924477    4448 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:26:58.924739    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.924805    4448 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:26:58.926818    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.926830    4448 main.go:141] libmachine: (ha-744000) DBG | pid 4331 is in state "Stopped"
	I0917 10:26:58.926844    4448 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
	I0917 10:26:58.927183    4448 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:26:59.037116    4448 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:26:59.037141    4448 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:26:59.037239    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037264    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037302    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:26:59.037345    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:26:59.037367    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:26:59.039007    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Pid is 4462
	I0917 10:26:59.039387    4448 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:26:59.039405    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:59.039460    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4462
	I0917 10:26:59.040899    4448 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:26:59.040968    4448 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:26:59.040982    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:26:59.040991    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:26:59.041010    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:26:59.041033    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:26:59.041040    4448 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:26:59.041046    4448 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
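Note: the IP lookup above resolves the VM's generated MAC to an address by scanning macOS's /var/db/dhcpd_leases. A minimal sketch of that scan, assuming the lease-block format implied by the dhcp entries printed above (name/ip_address/hw_address/lease fields per block; the helper name is hypothetical):

	package leases

	import (
		"bufio"
		"os"
		"regexp"
		"strings"
	)

	// findIPByMAC returns the ip_address of the lease whose hw_address
	// matches mac, or "" if no entry matches. Each lease block lists
	// ip_address before hw_address, so we remember the last ip seen.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		ipRe := regexp.MustCompile(`ip_address=(.+)`)
		hwRe := regexp.MustCompile(`hw_address=\d+,(.+)`)

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if m := ipRe.FindStringSubmatch(line); m != nil {
				ip = m[1] // most recent ip_address in this block
			}
			if m := hwRe.FindStringSubmatch(line); m != nil && strings.EqualFold(m[1], mac) {
				return ip, nil
			}
		}
		return "", sc.Err()
	}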
	I0917 10:26:59.041079    4448 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:26:59.041673    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:26:59.041837    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:59.042200    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:26:59.042209    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:59.042313    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:26:59.042393    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:26:59.042497    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042594    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:26:59.042817    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:26:59.043033    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:26:59.043044    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:26:59.047101    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:26:59.098991    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:26:59.099689    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.099714    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.099723    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.099730    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.478495    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:26:59.478510    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:26:59.593167    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.593183    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.593195    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.593203    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.594075    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:26:59.594086    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:05.183473    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:05.183540    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:05.183555    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:05.208169    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:10.113996    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:10.114014    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114152    4448 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:27:10.114163    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114266    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.114402    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.114494    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114584    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.114812    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.114997    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.115005    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:27:10.189969    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:27:10.189985    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.190121    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.190233    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190324    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190425    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.190562    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.190707    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.190718    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:10.253511    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:10.253531    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:10.253549    4448 buildroot.go:174] setting up certificates
	I0917 10:27:10.253555    4448 provision.go:84] configureAuth start
	I0917 10:27:10.253563    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.253694    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:10.253790    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.253930    4448 provision.go:143] copyHostCerts
	I0917 10:27:10.253971    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254039    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:10.254046    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254180    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:10.254370    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254409    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:10.254414    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254534    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:10.254684    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254722    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:10.254727    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254807    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:10.254980    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
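Note: configureAuth regenerates the Docker server certificate with the SAN list shown above (IP literals plus hostnames). A compact sketch of how such a SAN list maps onto a crypto/x509 template; serial-number generation and CA signing are elided, and this is not minikube's actual provision code:

	package certs

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCertTemplate mirrors the "san=[...]" list in the log:
	// IP literals go into IPAddresses, names into DNSNames.
	func serverCertTemplate(org string, sans []string) *x509.Certificate {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1), // real code uses a random serial
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		return tmpl
	}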
	I0917 10:27:10.443647    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:10.443709    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:10.443745    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.444017    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.444217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.444311    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.444408    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:10.481724    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:10.481797    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:10.501694    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:10.501755    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:27:10.521451    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:10.521514    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:27:10.541883    4448 provision.go:87] duration metric: took 288.31459ms to configureAuth
	I0917 10:27:10.541895    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:10.542067    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:10.542085    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:10.542217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.542312    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.542387    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542467    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542559    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.542679    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.542806    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.542813    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:10.601508    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:10.601520    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:10.601615    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:10.601630    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.601764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.601865    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.601953    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.602043    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.602200    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.602343    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.602386    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:10.669944    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:10.669969    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.670102    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.670200    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670294    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670389    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.670510    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.670646    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.670658    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:12.369424    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:12.369438    4448 machine.go:96] duration metric: took 13.32714724s to provisionDockerMachine
	I0917 10:27:12.369451    4448 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:27:12.369463    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:12.369473    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.369675    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:12.369692    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.369803    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.369884    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.369975    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.370067    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.413317    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:12.417238    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:12.417272    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:12.417380    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:12.417569    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:12.417576    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:12.417788    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:12.427707    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:12.461431    4448 start.go:296] duration metric: took 91.970306ms for postStartSetup
	I0917 10:27:12.461460    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.461662    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:12.461675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.461764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.461863    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.461951    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.462049    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.498975    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:12.499039    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:12.553785    4448 fix.go:56] duration metric: took 13.704442272s for fixHost
	I0917 10:27:12.553808    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.553948    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.554064    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554158    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554243    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.554376    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:12.554528    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:12.554535    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:12.611703    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594032.650749132
	
	I0917 10:27:12.611715    4448 fix.go:216] guest clock: 1726594032.650749132
	I0917 10:27:12.611721    4448 fix.go:229] Guest: 2024-09-17 10:27:12.650749132 -0700 PDT Remote: 2024-09-17 10:27:12.553798 -0700 PDT m=+14.131667372 (delta=96.951132ms)
	I0917 10:27:12.611739    4448 fix.go:200] guest clock delta is within tolerance: 96.951132ms
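Note: the guest-clock check above runs `date +%s.%N` in the VM and compares the result with the host clock, treating the restart as healthy only when the delta is within tolerance. A small sketch of that comparison (the 2s tolerance below is an assumption, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns
	// the signed difference between guest and host time.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// pad/truncate the fractional part to 9 digits
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		d, _ := clockDelta("1726594032.650749132", time.Now())
		fmt.Println("within tolerance:", d.Abs() < 2*time.Second) // assumed tolerance
	}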
	I0917 10:27:12.611750    4448 start.go:83] releasing machines lock for "ha-744000", held for 13.76244446s
	I0917 10:27:12.611768    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.611894    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:12.611995    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612340    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612438    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612522    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:12.612557    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612569    4448 ssh_runner.go:195] Run: cat /version.json
	I0917 10:27:12.612585    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612694    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612758    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612775    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612845    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612893    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612945    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.612977    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.648784    4448 ssh_runner.go:195] Run: systemctl --version
	I0917 10:27:12.693591    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:27:12.698718    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:12.698762    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:12.712125    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:12.712136    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.712235    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:12.730012    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:12.739057    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:12.747889    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:12.747935    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:12.757003    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.765797    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:12.774517    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.783400    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:12.792355    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:12.801214    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:12.810043    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:12.818991    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:12.826988    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:12.835075    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:12.932332    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:12.951203    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.951306    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:12.965837    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:12.981143    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:12.997816    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:13.008834    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.019726    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:13.047621    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.057914    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:13.072731    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:13.075778    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:13.083057    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
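Note: the "scp memory -->" lines copy in-memory assets into the guest over the existing SSH connection instead of reading a local file. One way to sketch that pattern with golang.org/x/crypto/ssh (this is not minikube's actual ssh_runner API):

	package sshutil

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// writeRemoteFile streams data to path on the guest by piping it
	// into `sudo tee` over a fresh SSH session, mirroring the
	// "scp memory" idea in the log.
	func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", path))
	}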
	I0917 10:27:13.096420    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:13.190446    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:13.291417    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:13.291479    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:13.305208    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:13.405566    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:27:15.763788    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.358187677s)
	I0917 10:27:15.763854    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:27:15.774266    4448 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:27:15.786987    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:15.797461    4448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:27:15.892958    4448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:27:15.992563    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.099704    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:27:16.113167    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:16.123851    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.230595    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:27:16.294806    4448 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:27:16.294898    4448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:27:16.300863    4448 start.go:563] Will wait 60s for crictl version
	I0917 10:27:16.300922    4448 ssh_runner.go:195] Run: which crictl
	I0917 10:27:16.304010    4448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:27:16.329606    4448 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:27:16.329710    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.346052    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.386748    4448 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:27:16.386784    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:16.387136    4448 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:27:16.390752    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
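Note: both /etc/hosts rewrites in this log (host.minikube.internal here, control-plane.minikube.internal further down) use the same drop-then-append one-liner, which keeps the entry idempotent across restarts. A sketch of assembling that command in Go (the function name is hypothetical):

	package hosts

	import "fmt"

	// ensureHostsEntryCmd returns a shell command that pins name to ip
	// in /etc/hosts: it drops any existing line for name, then appends
	// the desired mapping, mirroring the command in the log.
	func ensureHostsEntryCmd(ip, name string) string {
		entry := ip + "\t" + name // actual tab, as in the log
		return fmt.Sprintf(
			"/bin/bash -c \"{ grep -v $'\\t%s$' /etc/hosts; echo '%s'; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts\"",
			name, entry)
	}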
	I0917 10:27:16.401571    4448 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:27:16.401664    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:16.401736    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.415872    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.415884    4448 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:27:16.415970    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.427730    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.427747    4448 cache_images.go:84] Images are preloaded, skipping loading
	I0917 10:27:16.427754    4448 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:27:16.427829    4448 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:27:16.427915    4448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:27:16.463597    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:27:16.463611    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:27:16.463624    4448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:27:16.463640    4448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:27:16.463730    4448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
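Note: the kubeadm YAML above is rendered from the options struct logged just before it. A minimal text/template sketch of that render step, restricted to a few fields that appear in both the struct and the YAML (the template fragment is illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// An illustrative fragment of the ClusterConfiguration section,
	// parameterized the way the kubeadm options above are.
	const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	type kubeadmOpts struct {
		ControlPlaneAddress string
		APIServerPort       int
		KubernetesVersion   string
		DNSDomain           string
		PodSubnet           string
		ServiceCIDR         string
	}

	func main() {
		t := template.Must(template.New("cc").Parse(clusterConfigTmpl))
		// Values taken from the kubeadm options logged above.
		_ = t.Execute(os.Stdout, kubeadmOpts{
			ControlPlaneAddress: "control-plane.minikube.internal",
			APIServerPort:       8443,
			KubernetesVersion:   "v1.31.1",
			DNSDomain:           "cluster.local",
			PodSubnet:           "10.244.0.0/16",
			ServiceCIDR:         "10.96.0.0/12",
		})
	}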
	
	I0917 10:27:16.463744    4448 kube-vip.go:115] generating kube-vip config ...
	I0917 10:27:16.463801    4448 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:27:16.478021    4448 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:27:16.478094    4448 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 10:27:16.478153    4448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:27:16.486558    4448 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:27:16.486616    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:27:16.494493    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:27:16.507997    4448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:27:16.521295    4448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:27:16.535199    4448 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 10:27:16.548668    4448 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:27:16.551530    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.561441    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.669349    4448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:27:16.684528    4448 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:27:16.684541    4448 certs.go:194] generating shared ca certs ...
	I0917 10:27:16.684551    4448 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.684731    4448 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:27:16.684804    4448 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:27:16.684814    4448 certs.go:256] generating profile certs ...
	I0917 10:27:16.684905    4448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:27:16.684929    4448 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437
	I0917 10:27:16.684945    4448 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0917 10:27:16.754039    4448 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 ...
	I0917 10:27:16.754056    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437: {Name:mk79438fdb4dc3d525e8f682359147c957173c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754456    4448 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 ...
	I0917 10:27:16.754466    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437: {Name:mk6d911cd96357b3c3159c4d3a41f23afb7d4c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754680    4448 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:27:16.754895    4448 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
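	The apiserver serving certificate regenerated above carries every address a client might dial in its IP SANs (the service IP 10.96.0.1, localhost, both control-plane node IPs, and the kube-vip VIP 192.169.0.254), so one certificate validates regardless of which endpoint is used. A sketch of generating a certificate with that SAN list via crypto/x509; minikube signs with its minikubeCA, while this sketch self-signs purely to show the SAN mechanics:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs copied from the apiserver cert generation logged above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // expiry here is arbitrary; minikube has its own policy
		IPAddresses:  ips,                         // every address a client may use for the apiserver
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template doubles as parent); minikube signs with minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}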
	I0917 10:27:16.755149    4448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
	I0917 10:27:16.755158    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:27:16.755205    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:27:16.755227    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:27:16.755246    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:27:16.755264    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:27:16.755283    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:27:16.755301    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:27:16.755318    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:27:16.755412    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:27:16.755459    4448 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:27:16.755467    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:27:16.755497    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:27:16.755530    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:27:16.755558    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:27:16.755623    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:16.755655    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:16.755675    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:27:16.755693    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:27:16.756123    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:27:16.777874    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:27:16.799280    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:27:16.827224    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:27:16.853838    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:27:16.907328    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:27:16.953101    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:27:16.997682    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:27:17.038330    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:27:17.061602    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:27:17.092949    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:27:17.123494    4448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:27:17.140334    4448 ssh_runner.go:195] Run: openssl version
	I0917 10:27:17.145978    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:27:17.156986    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161699    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161756    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.170341    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:27:17.187142    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:27:17.201375    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204789    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204832    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.208961    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:27:17.218128    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:27:17.227213    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230513    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230553    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.234703    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
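	The three test -L || ln -fs commands above maintain OpenSSL's hashed-lookup directory: openssl x509 -hash -noout prints the subject-name hash (b5213941, 51391683, 3ec20f2e here), and a /etc/ssl/certs/<hash>.0 symlink is what lets OpenSSL locate the CA during verification. A sketch that wraps the same openssl invocation (the helper name linkByHash is ours; unlike the logged command, it relinks unconditionally rather than skipping an existing link):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors the logged commands: compute the OpenSSL subject
// hash of certPath, then symlink /etc/ssl/certs/<hash>.0 to it.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any existing link (ln -fs semantics)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}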
	I0917 10:27:17.243926    4448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:27:17.247354    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:27:17.251674    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:27:17.256090    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:27:17.260499    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:27:17.264702    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:27:17.268923    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
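	Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours (86400 seconds); since all of them pass, the existing kubeadm-managed certs are reused rather than regenerated. The same check in pure Go against a cert's NotAfter:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside d, the same question `openssl x509 -checkend` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}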
	I0917 10:27:17.273119    4448 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
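	The StartCluster dump above is the profile's in-memory cluster config printed verbatim; its Nodes slice is what the rest of this restart iterates over (the unnamed primary at 192.169.0.5, control-plane m02 at 192.169.0.6, and worker m04 at 192.169.0.8). A trimmed Go sketch of that node-list shape, with field names copied from the dump (not minikube's full types):

package main

import "fmt"

// Node mirrors the per-node fields visible in the StartCluster dump above.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

func main() {
	nodes := []Node{
		{Name: "", IP: "192.169.0.5", Port: 8443, KubernetesVersion: "v1.31.1", ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.169.0.6", Port: 8443, KubernetesVersion: "v1.31.1", ControlPlane: true, Worker: true},
		{Name: "m04", IP: "192.169.0.8", Port: 0, KubernetesVersion: "v1.31.1", ControlPlane: false, Worker: true},
	}
	for _, n := range nodes {
		fmt.Printf("%+v\n", n) // same %+v style as the logged dump
	}
}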
	I0917 10:27:17.273252    4448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:27:17.284758    4448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:27:17.293284    4448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:27:17.293296    4448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:27:17.293343    4448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:27:17.301434    4448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:27:17.301756    4448 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.301839    4448 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:27:17.302016    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.302656    4448 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.302866    4448 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x4ad2720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:27:17.303186    4448 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:27:17.303370    4448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:27:17.311395    4448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:27:17.311410    4448 kubeadm.go:597] duration metric: took 18.109722ms to restartPrimaryControlPlane
	I0917 10:27:17.311416    4448 kubeadm.go:394] duration metric: took 38.30313ms to StartCluster
	I0917 10:27:17.311425    4448 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.311502    4448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.311847    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.312074    4448 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:27:17.312086    4448 start.go:241] waiting for startup goroutines ...
	I0917 10:27:17.312098    4448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:27:17.312209    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.356558    4448 out.go:177] * Enabled addons: 
	I0917 10:27:17.377453    4448 addons.go:510] duration metric: took 65.359314ms for enable addons: enabled=[]
	I0917 10:27:17.377491    4448 start.go:246] waiting for cluster config update ...
	I0917 10:27:17.377508    4448 start.go:255] writing updated cluster config ...
	I0917 10:27:17.399517    4448 out.go:201] 
	I0917 10:27:17.421006    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.421153    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.443394    4448 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:27:17.485722    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:17.485786    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:27:17.485968    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:27:17.485986    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:27:17.486112    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.487099    4448 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:27:17.487205    4448 start.go:364] duration metric: took 81.172µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:27:17.487235    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:27:17.487243    4448 fix.go:54] fixHost starting: m02
	I0917 10:27:17.487683    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:27:17.487720    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:27:17.497503    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52168
	I0917 10:27:17.498037    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:27:17.498462    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:27:17.498477    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:27:17.498776    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:27:17.499011    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.499112    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:27:17.499198    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.499265    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:27:17.500274    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.500290    4448 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:27:17.500304    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:27:17.500387    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:27:17.542418    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:27:17.563504    4448 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:27:17.563707    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.563730    4448 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:27:17.564875    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.564887    4448 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4339 is in state "Stopped"
	I0917 10:27:17.564903    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
	I0917 10:27:17.565097    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:27:17.591233    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:27:17.591269    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:27:17.591443    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591484    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591541    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:27:17.591573    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
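	Restarting the stopped VM amounts to re-running hyperkit with the argv shown: 2 vCPUs and 2200 MiB, the machine's raw disk on virtio-blk, the boot2docker ISO on ahci-cd, a virtio-net NIC, and a direct kernel boot (-f kexec) of the cached bzimage/initrd pair, with the serial console captured to the autopty/console-ring files. A sketch assembling the same style of command with os/exec (arguments abridged from the logged CmdLine; Start() is left commented out):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	state := "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02"
	// Abridged from the hyperkit CmdLine logged above.
	args := []string{
		"-A", "-u",
		"-F", state + "/hyperkit.pid", // pid file checked on the next restart
		"-c", "2", "-m", "2200M",
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-s", "2:0,virtio-blk," + state + "/ha-744000-m02.rawdisk",
		"-s", "3,ahci-cd," + state + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	fmt.Println(cmd.String())
	// cmd.Start() would launch the VM; omitted in this sketch.
}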
	I0917 10:27:17.591591    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:27:17.592872    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Pid is 4469
	I0917 10:27:17.593367    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:27:17.593378    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.593408    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4469
	I0917 10:27:17.595062    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:27:17.595127    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:27:17.595146    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0d6c}
	I0917 10:27:17.595182    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:27:17.595200    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:27:17.595210    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:27:17.595213    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:27:17.595230    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:27:17.595241    4448 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
	I0917 10:27:17.595879    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:17.596065    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.596597    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:27:17.596609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.596723    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:17.596804    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:17.596890    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597096    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:17.597227    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:17.597374    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:17.597383    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:27:17.600658    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:27:17.609248    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:27:17.610115    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:17.610129    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:17.610159    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:17.610179    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:17.995972    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:27:17.995987    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:27:18.110623    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:18.110642    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:18.110651    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:18.110657    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:18.111459    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:27:18.111468    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:23.703289    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:23.703415    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:23.703428    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:23.727083    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:28.668165    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:28.668207    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668348    4448 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:27:28.668359    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668445    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.668533    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.668618    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.668945    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.669097    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.669106    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:27:28.749259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:27:28.749274    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.749405    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.749513    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749700    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.749847    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.749994    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.750009    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:28.821499    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:28.821514    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:28.821523    4448 buildroot.go:174] setting up certificates
	I0917 10:27:28.821528    4448 provision.go:84] configureAuth start
	I0917 10:27:28.821534    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.821669    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:28.821789    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.821885    4448 provision.go:143] copyHostCerts
	I0917 10:27:28.821910    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.821968    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:28.821973    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.822114    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:28.822315    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822354    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:28.822366    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822450    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:28.822596    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822635    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:28.822639    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822717    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:28.822857    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
	I0917 10:27:28.955024    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:28.955079    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:28.955094    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.955239    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.955341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.955430    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.955526    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:28.994909    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:28.994978    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:27:29.014096    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:29.014170    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:27:29.033197    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:29.033261    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:29.052129    4448 provision.go:87] duration metric: took 230.592645ms to configureAuth
	I0917 10:27:29.052147    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:29.052322    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:29.052336    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:29.052473    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.052573    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.052670    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052755    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052827    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.052942    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.053069    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.053076    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:29.116259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:29.116272    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:29.116365    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:29.116377    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.116506    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.116595    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116715    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116793    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.116936    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.117075    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.117118    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:29.192146    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:29.192170    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.192303    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.192391    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192497    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192577    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.192705    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.192844    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.192856    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:30.870717    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:30.870732    4448 machine.go:96] duration metric: took 13.274043119s to provisionDockerMachine
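	Note how the docker unit swap at 10:27:29-10:27:30 is guarded: the rendered unit is written to docker.service.new, diff -u compares it against the installed unit, and only on a difference does the mv, daemon-reload, enable, restart chain run. Here diff failed because no unit existed yet, so the chain ran and systemd created the multi-user.target.wants symlink. The same compare-then-swap pattern sketched in Go (systemctl flags trimmed; the helper name swapIfChanged is ours):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// swapIfChanged installs newPath over path and restarts unit, but only
// when the contents differ -- the diff || { mv; restart; } pattern above.
func swapIfChanged(path, newPath, unit string) error {
	old, _ := os.ReadFile(path) // a missing unit reads as empty, like the failed diff
	neu, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(old, neu) {
		return nil // unchanged: skip the service restart entirely
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := swapIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}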
	I0917 10:27:30.870747    4448 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:27:30.870755    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:30.870766    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.870980    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:30.870994    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.871125    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.871248    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.871341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.871432    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.914708    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:30.918099    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:30.918113    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:30.918212    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:30.918387    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:30.918394    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:30.918605    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:30.929083    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:30.958117    4448 start.go:296] duration metric: took 87.359751ms for postStartSetup
	I0917 10:27:30.958138    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.958316    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:30.958328    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.958426    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.958518    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.958597    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.958669    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.998754    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:30.998827    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:31.054686    4448 fix.go:56] duration metric: took 13.567353836s for fixHost
	I0917 10:27:31.054713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.054850    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.054939    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055014    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055085    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.055233    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:31.055380    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:31.055386    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:31.119216    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594051.159133703
	
	I0917 10:27:31.119227    4448 fix.go:216] guest clock: 1726594051.159133703
	I0917 10:27:31.119235    4448 fix.go:229] Guest: 2024-09-17 10:27:31.159133703 -0700 PDT Remote: 2024-09-17 10:27:31.054702 -0700 PDT m=+32.632454337 (delta=104.431703ms)
	I0917 10:27:31.119246    4448 fix.go:200] guest clock delta is within tolerance: 104.431703ms
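	The clock check reads date +%s.%N on the guest over SSH and diffs it against the host's wall clock; the ~104 ms skew above is within tolerance, so no resync is needed. A sketch of the delta computation using the guest timestamp from the log (the 2 s bound below is illustrative, not minikube's actual tolerance constant):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as logged above.
	guest := "1726594051.159133703"
	parts := strings.SplitN(guest, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guestTime := time.Unix(sec, nsec)

	host := time.Now() // minikube compares against the host clock at read time
	delta := host.Sub(guestTime)
	fmt.Printf("guest=%s delta=%s within 2s: %v\n",
		guestTime, delta, math.Abs(delta.Seconds()) < 2)
}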
	I0917 10:27:31.119250    4448 start.go:83] releasing machines lock for "ha-744000-m02", held for 13.631947572s
	I0917 10:27:31.119267    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.119393    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:31.143966    4448 out.go:177] * Found network options:
	I0917 10:27:31.164924    4448 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:27:31.185989    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.186029    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.186884    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187158    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187319    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:31.187368    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:27:31.187382    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.187491    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:27:31.187550    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.187616    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187796    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.187813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187986    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.188154    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188197    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:31.188284    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:27:31.224656    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:31.224727    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:31.272646    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:31.272663    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.272743    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.288486    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:31.297401    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:31.306736    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.306808    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:31.316018    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.325058    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:31.334512    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.343837    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:31.353242    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:31.362032    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:31.371387    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:31.380261    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:31.388512    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:31.396778    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.496690    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:31.515568    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.515642    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:31.540737    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.552945    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:31.572641    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.584129    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.595235    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:31.619571    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.631020    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.646195    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:31.649235    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:31.657206    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:31.670819    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:31.769091    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:31.876805    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.876827    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:31.890932    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.985803    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:28:33.019399    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.033193508s)
	I0917 10:28:33.019489    4448 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:28:33.055431    4448 out.go:201] 
	W0917 10:28:33.077249    4448 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:27:29 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.538749787Z" level=info msg="Starting up"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.539378325Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.541084999Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.558457504Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573199339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573220908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573258162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573299725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573411020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573446242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573553666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573587921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573599847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573607195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573685739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573880273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575404717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575443775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575555494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575590640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575719071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575763589Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.577951289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578038703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578076919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578089302Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578157091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578202689Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580641100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580726566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580738845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580747690Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580756580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580765114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580772643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580781164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580790542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580798635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580806480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580814346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580832655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580847752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580858242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580866931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580879634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580898230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580914939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580923943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580931177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580940500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580948337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580963023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580980668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580989498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580996636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581056206Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581091289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581104079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581113194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581120030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581133102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581145706Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581334956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581407817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581460834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581473448Z" level=info msg="containerd successfully booted in 0.023887s"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.569483774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.598149093Z" level=info msg="Loading containers: start."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.772640000Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.832682998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.874141710Z" level=info msg="Loading containers: done."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885048604Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885231945Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907500544Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907671752Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:27:30 ha-744000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.038076014Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039237554Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:27:32 ha-744000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039672384Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039926596Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039966362Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:33 ha-744000-m02 dockerd[1165]: time="2024-09-17T17:27:33.083664420Z" level=info msg="Starting up"
	Sep 17 17:28:33 ha-744000-m02 dockerd[1165]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 10:28:33.077325    4448 out.go:270] * 
	W0917 10:28:33.078575    4448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:28:33.141292    4448 out.go:201] 
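(Editor's note) The RUNTIME_ENABLE failure above has a clear shape in the journal: minikube rewrites /etc/containerd/config.toml and /etc/docker/daemon.json, restarts docker, the first dockerd (pid 484) comes up and is then terminated for the config reload, and the replacement dockerd (pid 1165) spends its whole startup window (17:27:33 to 17:28:33) failing to dial /run/containerd/containerd.sock before systemd marks the unit failed. A minimal triage sketch one might run inside the affected guest; these are standard systemd and iproute2 commands, not part of the test harness:

	# Inspect the unit and the journal around the failed restart.
	systemctl status docker.service --no-pager
	journalctl -u docker.service --no-pager -n 100
	# Is anything actually listening on the socket dockerd tried to dial?
	ss -xlp | grep containerd.sock || echo "no listener on containerd.sock"
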
	
	
	==> Docker <==
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.364009200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982686773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982795889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982809691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982891719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908438866Z" level=info msg="shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908495753Z" level=warning msg="cleaning up after shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908504694Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1178]: time="2024-09-17T17:28:15.909053440Z" level=info msg="ignoring event" container=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.924890203Z" level=info msg="shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925281000Z" level=warning msg="cleaning up after shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925315687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1178]: time="2024-09-17T17:28:26.926104549Z" level=info msg="ignoring event" container=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981215245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981300627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981313170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981748827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988154215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988302802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988330908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988447275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:56 ha-744000 dockerd[1184]: time="2024-09-17T17:28:56.389051429Z" level=info msg="shim disconnected" id=b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963 namespace=moby
	Sep 17 17:28:56 ha-744000 dockerd[1184]: time="2024-09-17T17:28:56.389386460Z" level=warning msg="cleaning up after shim disconnected" id=b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963 namespace=moby
	Sep 17 17:28:56 ha-744000 dockerd[1184]: time="2024-09-17T17:28:56.389462873Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:56 ha-744000 dockerd[1178]: time="2024-09-17T17:28:56.389846849Z" level=info msg="ignoring event" container=b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3757e12da538a       175ffd71cce3d       13 seconds ago       Running             kube-controller-manager   5                   ac5039c087055       kube-controller-manager-ha-744000
	b526083efb4fc       6bab7719df100       24 seconds ago       Exited              kube-apiserver            4                   049299c96bb2c       kube-apiserver-ha-744000
	6b1d67e1da594       175ffd71cce3d       55 seconds ago       Exited              kube-controller-manager   4                   ac5039c087055       kube-controller-manager-ha-744000
	bbf0d2ebe5c6c       9aa1fad941575       About a minute ago   Running             kube-scheduler            2                   339a7c29b977e       kube-scheduler-ha-744000
	1e359ca4a791e       2e96e5913fc06       About a minute ago   Running             etcd                      2                   bf723b1d8bf7c       etcd-ha-744000
	6df162190be2a       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   026314418eb78       kube-vip-ha-744000
	1b95d7a1c7708       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       2                   375cde06a4bcf       storage-provisioner
	079da006755a7       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   f0eee6e67fe42       busybox-7dff88458-cn52t
	9f76145e8eaf7       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   8b4b5191649e7       kindnet-c59lr
	6a4aba3acb1e9       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   3888ce04e78db       coredns-7c65d6cfc9-khnlh
	fb8b83fe49a6e       60c005f310ff3       4 minutes ago        Exited              kube-proxy                1                   f1782d63db94f       kube-proxy-6xd2h
	24cfd031ec879       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   244f5bc456efc       coredns-7c65d6cfc9-j9jcc
	cfbfd57cf2b56       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   433c480eea542       kube-vip-ha-744000
	a7645ef2ae8dd       9aa1fad941575       5 minutes ago        Exited              kube-scheduler            1                   fbf79ae31cbab       kube-scheduler-ha-744000
	23a7e0d95a77c       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   55cb3d05ddf34       etcd-ha-744000
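(Editor's note) In the table above, kube-apiserver (attempt 4) and kube-controller-manager (attempt 4) have Exited while etcd, kube-scheduler and kube-vip are still Running, i.e. the primary control plane is crash-looping rather than down outright. Since this boot pointed crictl at cri-dockerd (the /etc/crictl.yaml write earlier in this log), the same view can be reproduced by hand on the node; the container ID below is the exited apiserver attempt from this run:

	# List every attempt, including exited ones, and pull the crash logs.
	sudo crictl ps -a | grep -E 'kube-apiserver|kube-controller-manager'
	sudo crictl logs b526083efb4fc
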
	
	
	==> coredns [24cfd031ec87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52682 - 33898 "HINFO IN 2709939145458862568.721558315158165230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009931439s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[318103159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.683) (total time: 30003ms):
	Trace[318103159]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:24:50.686)
	Trace[318103159]: [30.003131559s] [30.003131559s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1979128092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1979128092]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1979128092]: [30.000652416s] [30.000652416s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1978210991]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1978210991]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1978210991]: [30.000766886s] [30.000766886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a4aba3acb1e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60360 - 19575 "HINFO IN 3607648931521447410.3411894034218696920. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009401347s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960564509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1960564509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.746)
	Trace[1960564509]: [30.00213331s] [30.00213331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197674287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1197674287]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[1197674287]: [30.002759704s] [30.002759704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[633118280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30003ms):
	Trace[633118280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[633118280]: [30.003193097s] [30.003193097s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
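(Editor's note) Both coredns replicas report the same thing: every list/watch against https://10.96.0.1:443 (the in-cluster kubernetes Service VIP) dies with an i/o timeout after 30s, so this is apiserver unreachability, not a DNS fault. A quick reachability sketch from the guest; 10.96.0.1 only answers if kube-proxy has programmed the Service VIP, so comparing it with the host endpoint the kubeconfig uses separates the two failure modes:

	# Service VIP path (needs kube-proxy rules in place):
	curl -k -m 5 https://10.96.0.1:443/healthz; echo "vip exit=$?"
	# Direct host endpoint, as used by kubectl on this node:
	curl -k -m 5 https://localhost:8443/healthz; echo "local exit=$?"
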
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 17:29:00.275114    3208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:00.276743    3208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:00.278269    3208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:00.279767    3208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:00.281342    3208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035209] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007985] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[Sep17 17:27] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006963] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.845078] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.235754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000048] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.478686] systemd-fstab-generator[466]: Ignoring "noauto" option for root device
	[  +0.092656] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +2.006519] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.259762] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.049883] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051714] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.112681] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +2.485271] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	[  +0.103516] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.100618] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.134329] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.431436] systemd-fstab-generator[1594]: Ignoring "noauto" option for root device
	[  +6.580361] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.488197] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [1e359ca4a791] <==
	{"level":"warn","ts":"2024-09-17T17:28:56.067995Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:56.573863Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:57.076627Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:57.095677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:57.095768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:57.095787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:57.095804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:57.577216Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:58.078131Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:58.578528Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:28:58.595884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:58.596021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:58.596042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:58.596061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:59.078913Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:59.263482Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:59.263606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:59.550788Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-17T17:28:59.550978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000986204s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-17T17:28:59.551037Z","caller":"traceutil/trace.go:171","msg":"trace[1448586196] range","detail":"{range_begin:; range_end:; }","duration":"7.001056931s","start":"2024-09-17T17:28:52.549966Z","end":"2024-09-17T17:28:59.551023Z","steps":["trace[1448586196] 'agreement among raft nodes before linearized reading'  (duration: 7.000983297s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T17:28:59.551118Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-17T17:29:00.095087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:00.095166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:00.095179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:00.095190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	
	
	==> etcd [23a7e0d95a77] <==
	{"level":"warn","ts":"2024-09-17T17:26:50.587150Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.962871734s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.169.0.5\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587161Z","caller":"traceutil/trace.go:171","msg":"trace[618307594] range","detail":"{range_begin:/registry/masterleases/192.169.0.5; range_end:; }","duration":"6.962884303s","start":"2024-09-17T17:26:43.624274Z","end":"2024-09-17T17:26:50.587158Z","steps":["trace[618307594] 'agreement among raft nodes before linearized reading'  (duration: 6.96287178s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:43.624238Z","time spent":"6.962930406s","remote":"127.0.0.1:50532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/masterleases/192.169.0.5\" "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.551739854s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587269Z","caller":"traceutil/trace.go:171","msg":"trace[474401785] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.551753744s","start":"2024-09-17T17:26:49.035511Z","end":"2024-09-17T17:26:50.587265Z","steps":["trace[474401785] 'agreement among raft nodes before linearized reading'  (duration: 1.551739815s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:49.035495Z","time spent":"1.551781157s","remote":"127.0.0.1:50648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.571949422s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587333Z","caller":"traceutil/trace.go:171","msg":"trace[779412434] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"3.571960909s","start":"2024-09-17T17:26:47.015370Z","end":"2024-09-17T17:26:50.587331Z","steps":["trace[779412434] 'agreement among raft nodes before linearized reading'  (duration: 3.571949266s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:47.015364Z","time spent":"3.571976754s","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:45.985835Z","time spent":"4.601799065s","remote":"127.0.0.1:50768","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-17T17:26:50.686768Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:26:50.686883Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686894Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686906Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686956Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686981Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687003Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687012Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.698284Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698473Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-744000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:29:00 up 2 min,  0 users,  load average: 0.13, 0.10, 0.04
	Linux ha-744000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f76145e8eaf] <==
	I0917 17:26:11.511367       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:11.512152       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:11.512248       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:11.512772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:11.512871       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504250       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:21.504302       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504625       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:21.504682       1 main.go:299] handling current node
	I0917 17:26:21.504706       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:21.504715       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:21.504816       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:21.504869       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:31.506309       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:31.506431       1 main.go:299] handling current node
	I0917 17:26:31.506449       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:31.506462       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:31.506621       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:31.506656       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.505932       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:41.506052       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:41.506553       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:41.506833       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.507226       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:41.507357       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b526083efb4f] <==
	I0917 17:28:36.086684       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 17:28:36.088246       1 server.go:142] Version: v1.31.1
	I0917 17:28:36.088285       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:36.354102       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 17:28:36.357696       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:28:36.370175       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 17:28:36.370322       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 17:28:36.370574       1 instance.go:232] Using reconciler: lease
	W0917 17:28:56.356155       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:56.356428       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:56.372981       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 17:28:56.373006       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3757e12da538] <==
	I0917 17:28:47.524594       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:47.708212       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:47.708245       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:47.709288       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:47.709434       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:47.709442       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:47.709457       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [6b1d67e1da59] <==
	I0917 17:28:05.497749       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:06.034875       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:06.034965       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:06.036148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:06.036157       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:06.036166       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:06.036173       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 17:28:26.901132       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [fb8b83fe49a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:24:21.123827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:24:21.146583       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:24:21.146876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:24:21.179243       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:24:21.179464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:24:21.179596       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:24:21.183190       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:24:21.184459       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:24:21.184543       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:24:21.188244       1 config.go:199] "Starting service config controller"
	I0917 17:24:21.188350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:24:21.188588       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:24:21.188659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:24:21.192108       1 config.go:328] "Starting node config controller"
	I0917 17:24:21.192216       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:24:21.289888       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:24:21.289903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:24:21.293411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7645ef2ae8d] <==
	E0917 17:23:52.361916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.361961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:23:52.361995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:23:52.362165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:23:52.362240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:23:52.362314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:23:52.362490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:23:52.362567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:23:52.362640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:23:52.362799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 17:23:53.372962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:26:50.603688       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bbf0d2ebe5c6] <==
	E0917 17:28:27.780999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:27.954747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:27.954795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:29.812244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:29.812295       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:31.899209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:31.899308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:32.373782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:32.373902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:35.010233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:35.010333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:46.379121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:46.379226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:47.366426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:47.366523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:48.382767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:48.383125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:49.591786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:49.592123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:49.647843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:49.648127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:50.257456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:50.257486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:59.608489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:59.608581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 17 17:28:40 ha-744000 kubelet[1601]: I0917 17:28:40.664516    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:42 ha-744000 kubelet[1601]: E0917 17:28:42.873716    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:42 ha-744000 kubelet[1601]: E0917 17:28:42.873771    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:45 ha-744000 kubelet[1601]: W0917 17:28:45.945277    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:45 ha-744000 kubelet[1601]: E0917 17:28:45.945790    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:46 ha-744000 kubelet[1601]: I0917 17:28:46.944782    1601 scope.go:117] "RemoveContainer" containerID="6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721"
	Sep 17 17:28:46 ha-744000 kubelet[1601]: E0917 17:28:46.945144    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:49 ha-744000 kubelet[1601]: E0917 17:28:49.017860    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	Sep 17 17:28:49 ha-744000 kubelet[1601]: I0917 17:28:49.875548    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:52 ha-744000 kubelet[1601]: E0917 17:28:52.089297    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:52 ha-744000 kubelet[1601]: E0917 17:28:52.090072    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:56 ha-744000 kubelet[1601]: E0917 17:28:56.945624    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: I0917 17:28:57.244872    1601 scope.go:117] "RemoveContainer" containerID="66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: I0917 17:28:57.245459    1601 scope.go:117] "RemoveContainer" containerID="b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: E0917 17:28:57.245556    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:28:58 ha-744000 kubelet[1601]: W0917 17:28:58.233904    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:58 ha-744000 kubelet[1601]: E0917 17:28:58.234030    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:58 ha-744000 kubelet[1601]: W0917 17:28:58.233904    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-744000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:58 ha-744000 kubelet[1601]: E0917 17:28:58.234177    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-744000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:59 ha-744000 kubelet[1601]: I0917 17:28:59.091390    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:59 ha-744000 kubelet[1601]: I0917 17:28:59.411014    1601 scope.go:117] "RemoveContainer" containerID="b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963"
	Sep 17 17:28:59 ha-744000 kubelet[1601]: E0917 17:28:59.411150    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:29:01 ha-744000 kubelet[1601]: E0917 17:29:01.305083    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:29:01 ha-744000 kubelet[1601]: E0917 17:29:01.305162    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:29:01 ha-744000 kubelet[1601]: E0917 17:29:01.305201    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000: exit status 2 (148.008565ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-744000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (2.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-744000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-744000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-744000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-744000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-744000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-744000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-744000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-744000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-744000 -n ha-744000: exit status 2 (147.290529ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 logs -n 25: (2.256210887s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m04 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp testdata/cp-test.txt                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000 sudo cat                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m02 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | ha-744000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-744000 ssh -n ha-744000-m03 sudo cat                                                                                      | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-744000 node stop m02 -v=7                                                                                                 | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:21 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-744000 node start m02 -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:21 PDT | 17 Sep 24 10:22 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000 -v=7                                                                                                       | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-744000 -v=7                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:22 PDT | 17 Sep 24 10:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true -v=7                                                                                                | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:23 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-744000                                                                                                            | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	| node    | ha-744000 node delete m03 -v=7                                                                                               | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-744000 stop -v=7                                                                                                          | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT | 17 Sep 24 10:26 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-744000 --wait=true                                                                                                     | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:26 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-744000                                                                                                             | ha-744000 | jenkins | v1.34.0 | 17 Sep 24 10:28 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 10:26:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 10:26:58.457695    4448 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:58.457869    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.457875    4448 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:58.457878    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.458048    4448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:58.459431    4448 out.go:352] Setting JSON to false
	I0917 10:26:58.481798    4448 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3385,"bootTime":1726590633,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:26:58.481949    4448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:26:58.503960    4448 out.go:177] * [ha-744000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:26:58.546841    4448 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:26:58.546875    4448 notify.go:220] Checking for updates...
	I0917 10:26:58.589550    4448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:26:58.610683    4448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:26:58.631667    4448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:26:58.652583    4448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:26:58.673667    4448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:26:58.695561    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:58.696255    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.696327    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.705884    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52142
	I0917 10:26:58.706304    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.706746    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.706764    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.707014    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.707146    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.707350    4448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:26:58.707601    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.707628    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.716185    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 10:26:58.716537    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.716881    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.716897    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.717100    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.717222    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.745596    4448 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:26:58.787571    4448 start.go:297] selected driver: hyperkit
	I0917 10:26:58.787600    4448 start.go:901] validating driver "hyperkit" against &{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.787838    4448 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:26:58.788024    4448 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.788251    4448 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:26:58.797793    4448 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:26:58.801784    4448 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.801808    4448 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:26:58.804449    4448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:26:58.804489    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:26:58.804523    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:26:58.804589    4448 start.go:340] cluster config:
	{Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:26:58.804704    4448 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:26:58.826385    4448 out.go:177] * Starting "ha-744000" primary control-plane node in "ha-744000" cluster
	I0917 10:26:58.847617    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:26:58.847686    4448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 10:26:58.847716    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:26:58.847928    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:26:58.847948    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:26:58.848103    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:58.849030    4448 start.go:360] acquireMachinesLock for ha-744000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:26:58.849203    4448 start.go:364] duration metric: took 147.892µs to acquireMachinesLock for "ha-744000"
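
The acquireMachinesLock line above shows minikube serializing access to the machine with a named lock specced as {Delay:500ms Timeout:13m0s}; here it was uncontended and took 147.892µs. A generic flock-based Go sketch of acquire-with-retry under that spec (minikube's real lock implementation differs; this is only an illustration of the pattern):

    package machineslock

    import (
    	"errors"
    	"os"
    	"syscall"
    	"time"
    )

    // acquire takes an exclusive advisory lock on path, retrying every
    // delay until timeout, echoing the {Delay:500ms Timeout:13m0s} spec
    // in the log above.
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return nil, err
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		// Non-blocking exclusive lock; fails immediately if held elsewhere.
    		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
    			return f, nil
    		}
    		if time.Now().After(deadline) {
    			f.Close()
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }
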
	I0917 10:26:58.849244    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:26:58.849261    4448 fix.go:54] fixHost starting: 
	I0917 10:26:58.849685    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.849713    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.858847    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52146
	I0917 10:26:58.859214    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.859547    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.859558    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.859809    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.859941    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:58.860044    4448 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:58.860131    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.860222    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:58.861252    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.861281    4448 fix.go:112] recreateIfNeeded on ha-744000: state=Stopped err=<nil>
	I0917 10:26:58.861296    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	W0917 10:26:58.861379    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:26:58.903396    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000" ...
	I0917 10:26:58.924477    4448 main.go:141] libmachine: (ha-744000) Calling .Start
	I0917 10:26:58.924739    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.924805    4448 main.go:141] libmachine: (ha-744000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid
	I0917 10:26:58.926818    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.926830    4448 main.go:141] libmachine: (ha-744000) DBG | pid 4331 is in state "Stopped"
	I0917 10:26:58.926844    4448 main.go:141] libmachine: (ha-744000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid...
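
The DBG lines above show the stale-pid cleanup: pid 4331 recorded in hyperkit.pid is no longer in the process table, so the file is removed before a fresh start. On Unix the conventional liveness probe is signal 0; a rough Go sketch of that check (the path and helper names are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    // pidAlive reports whether a process with the given pid exists.
    // Signal 0 checks existence without delivering a signal; note that
    // EPERM would also mean "exists", which this sketch ignores.
    func pidAlive(pid int) bool {
    	return syscall.Kill(pid, syscall.Signal(0)) == nil
    }

    func main() {
    	pidFile := "/tmp/hyperkit.pid" // illustrative path
    	data, err := os.ReadFile(pidFile)
    	if err != nil {
    		return // no pid file, nothing stale to clean up
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
    	if err == nil && !pidAlive(pid) {
    		fmt.Printf("pid %d missing from process table, removing stale %s\n", pid, pidFile)
    		os.Remove(pidFile)
    	}
    }
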
	I0917 10:26:58.927183    4448 main.go:141] libmachine: (ha-744000) DBG | Using UUID bcb5b96f-4d12-41bd-81db-c015832629bb
	I0917 10:26:59.037116    4448 main.go:141] libmachine: (ha-744000) DBG | Generated MAC 36:e3:93:ff:24:96
	I0917 10:26:59.037141    4448 main.go:141] libmachine: (ha-744000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:26:59.037239    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037264    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bcb5b96f-4d12-41bd-81db-c015832629bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cfe60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:26:59.037302    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bcb5b96f-4d12-41bd-81db-c015832629bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:26:59.037345    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bcb5b96f-4d12-41bd-81db-c015832629bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/ha-744000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:26:59.037367    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:26:59.039007    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 DEBUG: hyperkit: Pid is 4462
	I0917 10:26:59.039387    4448 main.go:141] libmachine: (ha-744000) DBG | Attempt 0
	I0917 10:26:59.039405    4448 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:59.039460    4448 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4462
	I0917 10:26:59.040899    4448 main.go:141] libmachine: (ha-744000) DBG | Searching for 36:e3:93:ff:24:96 in /var/db/dhcpd_leases ...
	I0917 10:26:59.040968    4448 main.go:141] libmachine: (ha-744000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:26:59.040982    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:26:59.040991    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:26:59.041010    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:26:59.041033    4448 main.go:141] libmachine: (ha-744000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0c82}
	I0917 10:26:59.041040    4448 main.go:141] libmachine: (ha-744000) DBG | Found match: 36:e3:93:ff:24:96
	I0917 10:26:59.041046    4448 main.go:141] libmachine: (ha-744000) DBG | IP: 192.169.0.5
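
To recover the VM's address after a restart, the driver scans macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the generated MAC (36:e3:93:ff:24:96), as the entries above show. A rough Go sketch of that lookup, assuming the usual lease-file shape of brace-delimited blocks of key=value lines with a "1," hardware-type prefix on hw_address (an assumption inferred from the ID fields above):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findIPByMAC returns the ip_address of the lease whose hw_address
    // ends in mac. Assumes ip_address precedes hw_address within a block,
    // as it does in typical dhcpd_leases files.
    func findIPByMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "{":
    			ip = "" // new lease block starts
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := findIPByMAC("/var/db/dhcpd_leases", "36:e3:93:ff:24:96")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("IP:", ip)
    }
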
	I0917 10:26:59.041079    4448 main.go:141] libmachine: (ha-744000) Calling .GetConfigRaw
	I0917 10:26:59.041673    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:26:59.041837    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:26:59.042200    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:26:59.042209    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:26:59.042313    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:26:59.042393    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:26:59.042497    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042594    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:26:59.042683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:26:59.042817    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:26:59.043033    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:26:59.043044    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:26:59.047101    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:26:59.098991    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:26:59.099689    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.099714    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.099723    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.099730    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.478495    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:26:59.478510    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:26:59.593167    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:26:59.593183    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:26:59.593195    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:26:59.593203    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:26:59.594075    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:26:59.594086    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:26:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:05.183473    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:05.183540    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:05.183555    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:05.208169    4448 main.go:141] libmachine: (ha-744000) DBG | 2024/09/17 10:27:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:10.113996    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:10.114014    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114152    4448 buildroot.go:166] provisioning hostname "ha-744000"
	I0917 10:27:10.114163    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.114266    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.114402    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.114494    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114584    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.114683    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.114812    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.114997    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.115005    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000 && echo "ha-744000" | sudo tee /etc/hostname
	I0917 10:27:10.189969    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000
	
	I0917 10:27:10.189985    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.190121    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.190233    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190324    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.190425    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.190562    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.190707    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.190718    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:10.253511    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:10.253531    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:10.253549    4448 buildroot.go:174] setting up certificates
	I0917 10:27:10.253555    4448 provision.go:84] configureAuth start
	I0917 10:27:10.253563    4448 main.go:141] libmachine: (ha-744000) Calling .GetMachineName
	I0917 10:27:10.253694    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:10.253790    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.253930    4448 provision.go:143] copyHostCerts
	I0917 10:27:10.253971    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254039    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:10.254046    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:10.254180    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:10.254370    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254409    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:10.254414    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:10.254534    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:10.254684    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254722    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:10.254727    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:10.254807    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:10.254980    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000 san=[127.0.0.1 192.169.0.5 ha-744000 localhost minikube]
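
provision.go:117 reissues the machine's server certificate, signed by the local minikube CA, covering the SANs listed above (127.0.0.1, 192.169.0.5, ha-744000, localhost, minikube). A condensed Go sketch of issuing such a certificate with crypto/x509; the key type, serial choice, and validity are assumptions, not minikube's actual implementation:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a server certificate for the given DNS/IP SANs
    // with the supplied CA, mirroring the san=[...] list in the log above.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
    	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-744000"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames, // e.g. ha-744000, localhost, minikube
    		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.169.0.5
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }
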
	I0917 10:27:10.443647    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:10.443709    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:10.443745    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.444017    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.444217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.444311    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.444408    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:10.481724    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:10.481797    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:10.501694    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:10.501755    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 10:27:10.521451    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:10.521514    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 10:27:10.541883    4448 provision.go:87] duration metric: took 288.31459ms to configureAuth
	I0917 10:27:10.541895    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:10.542067    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:10.542085    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:10.542217    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.542312    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.542387    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542467    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.542559    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.542679    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.542806    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.542813    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:10.601508    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:10.601520    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:10.601615    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:10.601630    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.601764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.601865    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.601953    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.602043    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.602200    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.602343    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.602386    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:10.669944    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:10.669969    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:10.670102    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:10.670200    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670294    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:10.670389    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:10.670510    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:10.670646    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:10.670658    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:12.369424    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:12.369438    4448 machine.go:96] duration metric: took 13.32714724s to provisionDockerMachine
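
The unit swap above is deliberately idempotent: docker.service.new is only moved into place, and docker only reloaded and restarted, when diff reports a difference, so an unchanged configuration skips the expensive daemon restart. The same compare-then-replace pattern expressed in Go (illustrative only):

    package unitswap

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // installIfChanged writes newContent to path and reloads/restarts the
    // unit only when the on-disk content differs, mirroring the
    // `diff ... || { mv ...; systemctl ...; }` command in the log above.
    func installIfChanged(path string, newContent []byte, unit string) error {
    	old, _ := os.ReadFile(path) // a missing file reads as empty, i.e. "changed"
    	if bytes.Equal(old, newContent) {
    		return nil // nothing to do
    	}
    	if err := os.WriteFile(path, newContent, 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
    	} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }
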
	I0917 10:27:12.369451    4448 start.go:293] postStartSetup for "ha-744000" (driver="hyperkit")
	I0917 10:27:12.369463    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:12.369473    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.369675    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:12.369692    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.369803    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.369884    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.369975    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.370067    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.413317    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:12.417238    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:12.417272    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:12.417380    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:12.417569    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:12.417576    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:12.417788    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:12.427707    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:12.461431    4448 start.go:296] duration metric: took 91.970306ms for postStartSetup
	I0917 10:27:12.461460    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.461662    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:12.461675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.461764    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.461863    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.461951    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.462049    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.498975    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:12.499039    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:12.553785    4448 fix.go:56] duration metric: took 13.704442272s for fixHost
	I0917 10:27:12.553808    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.553948    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.554064    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554158    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.554243    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.554376    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:12.554528    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 10:27:12.554535    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:12.611703    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594032.650749132
	
	I0917 10:27:12.611715    4448 fix.go:216] guest clock: 1726594032.650749132
	I0917 10:27:12.611721    4448 fix.go:229] Guest: 2024-09-17 10:27:12.650749132 -0700 PDT Remote: 2024-09-17 10:27:12.553798 -0700 PDT m=+14.131667372 (delta=96.951132ms)
	I0917 10:27:12.611739    4448 fix.go:200] guest clock delta is within tolerance: 96.951132ms
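
fix.go compares the guest's `date +%s.%N` output against the host clock and only resynchronizes when the delta exceeds a tolerance; here the 96.951132ms delta was within it. A sketch of that comparison (the tolerance is not stated in the log, so it is a parameter here; float64 parsing loses a few hundred nanoseconds, which is irrelevant at millisecond tolerances):

    package clockcheck

    import (
    	"strconv"
    	"time"
    )

    // withinTolerance parses the guest's `date +%s.%N` output and reports
    // whether |guest - host| is at most tol.
    func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol, nil
    }
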
	I0917 10:27:12.611750    4448 start.go:83] releasing machines lock for "ha-744000", held for 13.76244446s
	I0917 10:27:12.611768    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.611894    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:12.611995    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612340    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612438    4448 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:27:12.612522    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:12.612557    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612569    4448 ssh_runner.go:195] Run: cat /version.json
	I0917 10:27:12.612585    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:27:12.612675    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612694    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:27:12.612758    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612775    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:27:12.612845    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612893    4448 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:27:12.612945    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.612977    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:27:12.648784    4448 ssh_runner.go:195] Run: systemctl --version
	I0917 10:27:12.693591    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:27:12.698718    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:12.698762    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:12.712125    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:12.712136    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.712235    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:12.730012    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:12.739057    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:12.747889    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:12.747935    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:12.757003    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.765797    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:12.774517    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:12.783400    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:12.792355    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:12.801214    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:12.810043    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:27:12.818991    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:12.826988    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:12.835075    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:12.932332    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:12.951203    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:12.951306    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:12.965837    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:12.981143    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:12.997816    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:13.008834    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.019726    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:13.047621    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:13.057914    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:13.072731    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:13.075778    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:13.083057    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:13.096420    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:13.190446    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:13.291417    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:13.291479    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:27:13.305208    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:13.405566    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:27:15.763788    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.358187677s)
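
docker.go:574 pushes a small daemon.json (130 bytes here) that pins docker's cgroup driver to cgroupfs before the restart above. The exact payload is not shown in the log; a plausible shape marshaled in Go, with the field values being assumptions inferred only from the "cgroupfs" message:

    package dockercfg

    import "encoding/json"

    // daemonJSON models a minimal /etc/docker/daemon.json that pins the
    // cgroup driver. An assumed payload, not the verbatim 130 bytes.
    type daemonJSON struct {
    	ExecOpts []string `json:"exec-opts"`
    }

    func render() ([]byte, error) {
    	return json.MarshalIndent(daemonJSON{
    		ExecOpts: []string{"native.cgroupdriver=cgroupfs"},
    	}, "", "  ")
    }
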
	I0917 10:27:15.763854    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 10:27:15.774266    4448 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 10:27:15.786987    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:15.797461    4448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 10:27:15.892958    4448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 10:27:15.992563    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.099704    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 10:27:16.113167    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 10:27:16.123851    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.230595    4448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 10:27:16.294806    4448 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 10:27:16.294898    4448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 10:27:16.300863    4448 start.go:563] Will wait 60s for crictl version
	I0917 10:27:16.300922    4448 ssh_runner.go:195] Run: which crictl
	I0917 10:27:16.304010    4448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 10:27:16.329606    4448 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 10:27:16.329710    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.346052    4448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 10:27:16.386748    4448 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 10:27:16.386784    4448 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:27:16.387136    4448 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 10:27:16.390752    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.401571    4448 kubeadm.go:883] updating cluster {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 10:27:16.401664    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:16.401736    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.415872    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.415884    4448 docker.go:615] Images already preloaded, skipping extraction
	I0917 10:27:16.415970    4448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 10:27:16.427730    4448 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 10:27:16.427747    4448 cache_images.go:84] Images are preloaded, skipping loading
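
Both docker images listings return the complete preload set, so minikube skips tarball extraction and per-image loading. The decision amounts to a set comparison between what the runtime reports and what the preload is expected to contain; a sketch of that check (the want list here is illustrative, not minikube's internal manifest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImagesPresent reports whether every image in want already
// appears in `docker images --format {{.Repository}}:{{.Tag}}` output.
func preloadedImagesPresent(want []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range want {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloadedImagesPresent([]string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	})
	fmt.Println(ok, err)
}
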
	I0917 10:27:16.427754    4448 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 10:27:16.427829    4448 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-744000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 10:27:16.427915    4448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 10:27:16.463597    4448 cni.go:84] Creating CNI manager for ""
	I0917 10:27:16.463611    4448 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 10:27:16.463624    4448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 10:27:16.463640    4448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-744000 NodeName:ha-744000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 10:27:16.463730    4448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-744000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
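
The kubeadm config above is one YAML stream carrying four documents: InitConfiguration (node-local API endpoint, CRI socket, kubelet extra args), ClusterConfiguration (cluster-wide settings, including controlPlaneEndpoint control-plane.minikube.internal:8443, which fronts all control-plane nodes in this HA profile), KubeletConfiguration, and KubeProxyConfiguration. A sketch that enumerates the documents in such a stream with gopkg.in/yaml.v3 (illustrative only; minikube renders the file from templates rather than decoding it):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder walks a multi-document stream; Decode returns io.EOF
	// once the last document has been read.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
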
	
	I0917 10:27:16.463744    4448 kube-vip.go:115] generating kube-vip config ...
	I0917 10:27:16.463801    4448 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 10:27:16.478021    4448 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 10:27:16.478094    4448 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
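
kube-vip is deployed as a static pod (hence the manifest written to /etc/kubernetes/manifests below) with hostNetwork and the NET_ADMIN/NET_RAW capabilities it needs to claim the virtual IP 192.169.0.254 on eth0 via ARP; the earlier modprobe of the ip_vs modules is what lets the auto-enabled lb_enable mode balance API-server traffic across control planes. A sketch that unmarshals just enough of such a manifest to read the VIP settings back out (gopkg.in/yaml.v3 again; the path is hypothetical):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// pod captures the minimal slice of the Pod schema needed to reach
// the container environment variables.
type pod struct {
	Spec struct {
		Containers []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // hypothetical path
	if err != nil {
		panic(err)
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	for _, env := range p.Spec.Containers[0].Env {
		switch env.Name {
		case "address", "cp_enable", "lb_enable":
			fmt.Printf("%s=%s\n", env.Name, env.Value)
		}
	}
}
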
	I0917 10:27:16.478153    4448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 10:27:16.486558    4448 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 10:27:16.486616    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 10:27:16.494493    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 10:27:16.507997    4448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 10:27:16.521295    4448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 10:27:16.535199    4448 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
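
The "scp memory" transfers push rendered, in-memory assets (the kubelet drop-in, kubelet.service, kubeadm.yaml.new, kube-vip.yaml) straight to the guest over the established SSH session, which is why only byte counts appear in the log. A sketch of streaming bytes to a root-owned remote path with golang.org/x/crypto/ssh (host, user, and key path are assumptions, and sudo tee stands in for the actual transfer mechanism):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes data to remotePath on the host behind client,
// using `sudo tee` so the destination may be root-owned.
func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + remotePath + " >/dev/null")
}

func main() {
	key, err := os.ReadFile("id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := pushBytes(client, []byte("demo\n"), "/var/tmp/minikube/demo"); err != nil {
		panic(err)
	}
}
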
	I0917 10:27:16.548668    4448 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 10:27:16.551530    4448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 10:27:16.561441    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:16.669349    4448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 10:27:16.684528    4448 certs.go:68] Setting up /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000 for IP: 192.169.0.5
	I0917 10:27:16.684541    4448 certs.go:194] generating shared ca certs ...
	I0917 10:27:16.684551    4448 certs.go:226] acquiring lock for ca certs: {Name:mkf125882918ae047e70a2a13fee9f5c6e85700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.684731    4448 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key
	I0917 10:27:16.684804    4448 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key
	I0917 10:27:16.684814    4448 certs.go:256] generating profile certs ...
	I0917 10:27:16.684905    4448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key
	I0917 10:27:16.684929    4448 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437
	I0917 10:27:16.684945    4448 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0917 10:27:16.754039    4448 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 ...
	I0917 10:27:16.754056    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437: {Name:mk79438fdb4dc3d525e8f682359147c957173c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754456    4448 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 ...
	I0917 10:27:16.754466    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437: {Name:mk6d911cd96357b3c3159c4d3a41f23afb7d4c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:16.754680    4448 certs.go:381] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt
	I0917 10:27:16.754895    4448 certs.go:385] copying /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key.b792d437 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key
	I0917 10:27:16.755149    4448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key
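
The apiserver serving certificate is regenerated here because its SAN list must cover every address a client may dial: the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1, both control-plane node IPs, and the kube-vip VIP 192.169.0.254. A compile-only sketch of minting such a cert with crypto/x509, assuming caCert and caKey are already loaded (this is the shape of the operation, not minikube's crypto.go):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert signs a serving certificate whose IP SANs include the
// service IP, loopback, the node IPs, and the HA virtual IP.
func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"),
			net.ParseIP("192.169.0.254"), // the kube-vip VIP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
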
	I0917 10:27:16.755158    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 10:27:16.755205    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 10:27:16.755227    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 10:27:16.755246    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 10:27:16.755264    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 10:27:16.755283    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 10:27:16.755301    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 10:27:16.755318    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 10:27:16.755412    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem (1338 bytes)
	W0917 10:27:16.755459    4448 certs.go:480] ignoring /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121_empty.pem, impossibly tiny 0 bytes
	I0917 10:27:16.755467    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 10:27:16.755497    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem (1078 bytes)
	I0917 10:27:16.755530    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem (1123 bytes)
	I0917 10:27:16.755558    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem (1675 bytes)
	I0917 10:27:16.755623    4448 certs.go:484] found cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:16.755655    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:16.755675    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem -> /usr/share/ca-certificates/2121.pem
	I0917 10:27:16.755693    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /usr/share/ca-certificates/21212.pem
	I0917 10:27:16.756123    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 10:27:16.777874    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 10:27:16.799280    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 10:27:16.827224    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 10:27:16.853838    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 10:27:16.907328    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 10:27:16.953101    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 10:27:16.997682    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 10:27:17.038330    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 10:27:17.061602    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/2121.pem --> /usr/share/ca-certificates/2121.pem (1338 bytes)
	I0917 10:27:17.092949    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /usr/share/ca-certificates/21212.pem (1708 bytes)
	I0917 10:27:17.123494    4448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 10:27:17.140334    4448 ssh_runner.go:195] Run: openssl version
	I0917 10:27:17.145978    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 10:27:17.156986    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161699    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.161756    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 10:27:17.170341    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 10:27:17.187142    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121.pem && ln -fs /usr/share/ca-certificates/2121.pem /etc/ssl/certs/2121.pem"
	I0917 10:27:17.201375    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204789    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.204832    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121.pem
	I0917 10:27:17.208961    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2121.pem /etc/ssl/certs/51391683.0"
	I0917 10:27:17.218128    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21212.pem && ln -fs /usr/share/ca-certificates/21212.pem /etc/ssl/certs/21212.pem"
	I0917 10:27:17.227213    4448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230513    4448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.230553    4448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21212.pem
	I0917 10:27:17.234703    4448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21212.pem /etc/ssl/certs/3ec20f2e.0"
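
Each CA lands on the guest twice: as a readable file under /usr/share/ca-certificates/ and as an /etc/ssl/certs/<subject-hash>.0 symlink, the hashed layout OpenSSL uses to locate trust anchors (b5213941 above is the minikube CA's subject hash from `openssl x509 -hash`). A sketch of the symlink step, shelling out to openssl as the log does (the paths come from the log; the helper itself is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert installs pemPath into OpenSSL's hashed-lookup directory by
// creating /etc/ssl/certs/<subject-hash>.0 -> pemPath.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
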
	I0917 10:27:17.243926    4448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 10:27:17.247354    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 10:27:17.251674    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 10:27:17.256090    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 10:27:17.260499    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 10:27:17.264702    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 10:27:17.268923    4448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
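
The six `-checkend 86400` invocations verify that none of the control-plane client and serving certificates expires within the next 24 hours (86400 seconds); a failure there would force regeneration before kubeadm runs. The same check in pure Go, reading the PEM directly rather than shelling out (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// within d (the equivalent of `openssl x509 -checkend`).
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
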
	I0917 10:27:17.273119    4448 kubeadm.go:392] StartCluster: {Name:ha-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-744000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:27:17.273252    4448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 10:27:17.284758    4448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 10:27:17.293284    4448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 10:27:17.293296    4448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 10:27:17.293343    4448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 10:27:17.301434    4448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:27:17.301756    4448 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-744000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.301839    4448 kubeconfig.go:62] /Users/jenkins/minikube-integration/19662-1558/kubeconfig needs updating (will repair): [kubeconfig missing "ha-744000" cluster setting kubeconfig missing "ha-744000" context setting]
	I0917 10:27:17.302016    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
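
The verifier found neither an "ha-744000" cluster nor context in the Jenkins kubeconfig, so both entries are added under a file lock, pointing at https://192.169.0.5:8443 with the profile's client certificate. A sketch of that repair with client-go's clientcmd package (the paths are placeholders and the lock is omitted; minikube's own repair lives in kubeconfig.go):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/Users/jenkins/.kube/config" // hypothetical kubeconfig path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}

	// Add the missing cluster entry.
	cluster := api.NewCluster()
	cluster.Server = "https://192.169.0.5:8443"
	cluster.CertificateAuthority = "/path/to/ca.crt" // hypothetical

	// Add the matching context entry.
	ctx := api.NewContext()
	ctx.Cluster = "ha-744000"
	ctx.AuthInfo = "ha-744000"

	cfg.Clusters["ha-744000"] = cluster
	cfg.Contexts["ha-744000"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
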
	I0917 10:27:17.302656    4448 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.302866    4448 kapi.go:59] client config for ha-744000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/client.key", CAFile:"/Users/jenkins/minikube-integration/19662-1558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x4ad2720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 10:27:17.303186    4448 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 10:27:17.303370    4448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 10:27:17.311395    4448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 10:27:17.311410    4448 kubeadm.go:597] duration metric: took 18.109722ms to restartPrimaryControlPlane
	I0917 10:27:17.311416    4448 kubeadm.go:394] duration metric: took 38.30313ms to StartCluster
	I0917 10:27:17.311425    4448 settings.go:142] acquiring lock: {Name:mkbfad4c3b08cc53a3f164d824f2d3740891fac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.311502    4448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:27:17.311847    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/kubeconfig: {Name:mk45a7c4195a5b41f1a76242a014d6d35669d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 10:27:17.312074    4448 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 10:27:17.312086    4448 start.go:241] waiting for startup goroutines ...
	I0917 10:27:17.312098    4448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 10:27:17.312209    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.356558    4448 out.go:177] * Enabled addons: 
	I0917 10:27:17.377453    4448 addons.go:510] duration metric: took 65.359314ms for enable addons: enabled=[]
	I0917 10:27:17.377491    4448 start.go:246] waiting for cluster config update ...
	I0917 10:27:17.377508    4448 start.go:255] writing updated cluster config ...
	I0917 10:27:17.399517    4448 out.go:201] 
	I0917 10:27:17.421006    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:17.421153    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.443394    4448 out.go:177] * Starting "ha-744000-m02" control-plane node in "ha-744000" cluster
	I0917 10:27:17.485722    4448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 10:27:17.485786    4448 cache.go:56] Caching tarball of preloaded images
	I0917 10:27:17.485968    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 10:27:17.485986    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 10:27:17.486112    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.487099    4448 start.go:360] acquireMachinesLock for ha-744000-m02: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:27:17.487205    4448 start.go:364] duration metric: took 81.172µs to acquireMachinesLock for "ha-744000-m02"
	I0917 10:27:17.487235    4448 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:27:17.487243    4448 fix.go:54] fixHost starting: m02
	I0917 10:27:17.487683    4448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:27:17.487720    4448 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:27:17.497503    4448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52168
	I0917 10:27:17.498037    4448 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:27:17.498462    4448 main.go:141] libmachine: Using API Version  1
	I0917 10:27:17.498477    4448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:27:17.498776    4448 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:27:17.499011    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.499112    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:27:17.499198    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.499265    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:27:17.500274    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.500290    4448 fix.go:112] recreateIfNeeded on ha-744000-m02: state=Stopped err=<nil>
	I0917 10:27:17.500304    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	W0917 10:27:17.500387    4448 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:27:17.542418    4448 out.go:177] * Restarting existing hyperkit VM for "ha-744000-m02" ...
	I0917 10:27:17.563504    4448 main.go:141] libmachine: (ha-744000-m02) Calling .Start
	I0917 10:27:17.563707    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.563730    4448 main.go:141] libmachine: (ha-744000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid
	I0917 10:27:17.564875    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:27:17.564887    4448 main.go:141] libmachine: (ha-744000-m02) DBG | pid 4339 is in state "Stopped"
	I0917 10:27:17.564903    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid...
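
State detection for the stopped VM hinges on the pid file: the driver reads the recorded pid (4339), finds no such process, concludes the machine is Stopped despite the leftover file (the unclean-shutdown case the log calls out), removes the stale file, and restarts. Probing a pid without delivering a real signal uses signal 0, as in this sketch:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether the process recorded in pidFile still exists.
// Signal 0 checks existence/permissions without sending anything; any
// error other than ESRCH means the process is still there.
func pidAlive(pidFile string) (bool, error) {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return false, err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return false, err
	}
	err = syscall.Kill(pid, 0)
	return err == nil || err == syscall.EPERM, nil
}

func main() {
	alive, err := pidAlive("hyperkit.pid") // hypothetical path
	fmt.Println(alive, err)
	if err == nil && !alive {
		os.Remove("hyperkit.pid") // stale pid file: safe to clear and restart
	}
}
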
	I0917 10:27:17.565097    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Using UUID 84417734-d0f3-4fed-a88c-11fa06a6299e
	I0917 10:27:17.591233    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Generated MAC 72:92:6:7e:7d:92
	I0917 10:27:17.591269    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000
	I0917 10:27:17.591443    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591484    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"84417734-d0f3-4fed-a88c-11fa06a6299e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbec0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:27:17.591541    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "84417734-d0f3-4fed-a88c-11fa06a6299e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"}
	I0917 10:27:17.591573    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 84417734-d0f3-4fed-a88c-11fa06a6299e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/ha-744000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-744000"
	I0917 10:27:17.591591    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:27:17.592872    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 DEBUG: hyperkit: Pid is 4469
	I0917 10:27:17.593367    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Attempt 0
	I0917 10:27:17.593378    4448 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:27:17.593408    4448 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4469
	I0917 10:27:17.595062    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Searching for 72:92:6:7e:7d:92 in /var/db/dhcpd_leases ...
	I0917 10:27:17.595127    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 10:27:17.595146    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:e3:93:ff:24:96 ID:1,36:e3:93:ff:24:96 Lease:0x66eb0d6c}
	I0917 10:27:17.595182    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5a:8d:be:33:c3:18 ID:1,5a:8d:be:33:c3:18 Lease:0x66e9bbc3}
	I0917 10:27:17.595200    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b6:cf:5d:a2:4f:b0 ID:1,b6:cf:5d:a2:4f:b0 Lease:0x66eb0cdf}
	I0917 10:27:17.595210    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetConfigRaw
	I0917 10:27:17.595213    4448 main.go:141] libmachine: (ha-744000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:92:6:7e:7d:92 ID:1,72:92:6:7e:7d:92 Lease:0x66eb0c95}
	I0917 10:27:17.595230    4448 main.go:141] libmachine: (ha-744000-m02) DBG | Found match: 72:92:6:7e:7d:92
	I0917 10:27:17.595241    4448 main.go:141] libmachine: (ha-744000-m02) DBG | IP: 192.169.0.6
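
hyperkit exposes no API for guest addresses, so the driver recovers the VM's IP by matching the MAC it generated (72:92:6:7e:7d:92) against macOS's DHCP lease database, /var/db/dhcpd_leases, which stores one { ... } block per lease. A rough parser for that file (the ip_address/hw_address field names follow the format observed on macOS and should be treated as an assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the lease file for an entry whose hw_address matches mac
// and returns the ip_address recorded in the same block. Blocks look like:
//   { name=minikube ip_address=192.169.0.6 hw_address=1,72:92:6:7e:7d:92 ... }
func ipForMAC(leaseFile, mac string) (string, error) {
	data, err := os.ReadFile(leaseFile)
	if err != nil {
		return "", err
	}
	ip := ""
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<type>,<mac>"; compare only the MAC part.
			parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
			if len(parts) == 2 && parts[1] == mac {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "72:92:6:7e:7d:92")
	fmt.Println(ip, err)
}
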
	I0917 10:27:17.595879    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:17.596065    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/ha-744000/config.json ...
	I0917 10:27:17.596597    4448 machine.go:93] provisionDockerMachine start ...
	I0917 10:27:17.596609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:17.596723    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:17.596804    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:17.596890    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:17.597096    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:17.597227    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:17.597374    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:17.597383    4448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:27:17.600658    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:27:17.609248    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:27:17.610115    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:17.610129    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:17.610159    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:17.610179    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:17.995972    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:27:17.995987    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:27:18.110623    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:27:18.110642    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:27:18.110651    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:27:18.110657    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:27:18.111459    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:27:18.111468    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:27:23.703289    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:27:23.703415    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:27:23.703428    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:27:23.727083    4448 main.go:141] libmachine: (ha-744000-m02) DBG | 2024/09/17 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:27:28.668165    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:27:28.668207    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668348    4448 buildroot.go:166] provisioning hostname "ha-744000-m02"
	I0917 10:27:28.668359    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.668445    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.668533    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.668618    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.668813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.668945    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.669097    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.669106    4448 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-744000-m02 && echo "ha-744000-m02" | sudo tee /etc/hostname
	I0917 10:27:28.749259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-744000-m02
	
	I0917 10:27:28.749274    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.749405    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.749513    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749609    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.749700    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.749847    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:28.749994    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:28.750009    4448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-744000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-744000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-744000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:27:28.821499    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:27:28.821514    4448 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:27:28.821523    4448 buildroot.go:174] setting up certificates
	I0917 10:27:28.821528    4448 provision.go:84] configureAuth start
	I0917 10:27:28.821534    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetMachineName
	I0917 10:27:28.821669    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:28.821789    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.821885    4448 provision.go:143] copyHostCerts
	I0917 10:27:28.821910    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.821968    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:27:28.821973    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:27:28.822114    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:27:28.822315    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822354    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:27:28.822366    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:27:28.822450    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:27:28.822596    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822635    4448 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:27:28.822639    4448 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:27:28.822717    4448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:27:28.822857    4448 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.ha-744000-m02 san=[127.0.0.1 192.169.0.6 ha-744000-m02 localhost minikube]
	I0917 10:27:28.955024    4448 provision.go:177] copyRemoteCerts
	I0917 10:27:28.955079    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:27:28.955094    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:28.955239    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:28.955341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:28.955430    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:28.955526    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:28.994909    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 10:27:28.994978    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 10:27:29.014096    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 10:27:29.014170    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:27:29.033197    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 10:27:29.033261    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:27:29.052129    4448 provision.go:87] duration metric: took 230.592645ms to configureAuth
	I0917 10:27:29.052147    4448 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:27:29.052322    4448 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:27:29.052336    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:29.052473    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.052573    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.052670    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052755    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.052827    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.052942    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.053069    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.053076    4448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:27:29.116259    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:27:29.116272    4448 buildroot.go:70] root file system type: tmpfs
	I0917 10:27:29.116365    4448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:27:29.116377    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.116506    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.116595    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116715    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.116793    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.116936    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.117075    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.117118    4448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:27:29.192146    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:27:29.192170    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:29.192303    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:29.192391    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192497    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:29.192577    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:29.192705    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:29.192844    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:29.192856    4448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:27:30.870717    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:27:30.870732    4448 machine.go:96] duration metric: took 13.274043119s to provisionDockerMachine
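	
	The unit update above is deliberately idempotent: the new unit is written to docker.service.new, and it is only moved into place (followed by daemon-reload, enable, restart) when diff exits non-zero, either because the files differ or, as here, because the old file does not exist yet. The ExecStart= pair inside the unit follows the standard systemd rule the unit's own comments describe: an empty ExecStart= clears any inherited command before the real one is set. A minimal drop-in using the same convention (a hypothetical override file, not one written by this run):
	
	    # /etc/systemd/system/docker.service.d/override.conf  (hypothetical example)
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	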
	I0917 10:27:30.870747    4448 start.go:293] postStartSetup for "ha-744000-m02" (driver="hyperkit")
	I0917 10:27:30.870755    4448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:27:30.870766    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.870980    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:27:30.870994    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.871125    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.871248    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.871341    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.871432    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.914708    4448 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:27:30.918099    4448 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:27:30.918113    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:27:30.918212    4448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:27:30.918387    4448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:27:30.918394    4448 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> /etc/ssl/certs/21212.pem
	I0917 10:27:30.918605    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:27:30.929083    4448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:27:30.958117    4448 start.go:296] duration metric: took 87.359751ms for postStartSetup
	I0917 10:27:30.958138    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:30.958316    4448 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 10:27:30.958328    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:30.958426    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:30.958518    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:30.958597    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:30.958669    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:30.998754    4448 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 10:27:30.998827    4448 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 10:27:31.054686    4448 fix.go:56] duration metric: took 13.567353836s for fixHost
	I0917 10:27:31.054713    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.054850    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.054939    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055014    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.055085    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.055233    4448 main.go:141] libmachine: Using SSH client type: native
	I0917 10:27:31.055380    4448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x33fc820] 0x33ff500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 10:27:31.055386    4448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:27:31.119216    4448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594051.159133703
	
	I0917 10:27:31.119227    4448 fix.go:216] guest clock: 1726594051.159133703
	I0917 10:27:31.119235    4448 fix.go:229] Guest: 2024-09-17 10:27:31.159133703 -0700 PDT Remote: 2024-09-17 10:27:31.054702 -0700 PDT m=+32.632454337 (delta=104.431703ms)
	I0917 10:27:31.119246    4448 fix.go:200] guest clock delta is within tolerance: 104.431703ms
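	
	The fix step compares host and guest clocks by running date +%s.%N over SSH; here the delta is about 104ms, within tolerance. The same comparison by hand, assuming direct SSH access to the guest (illustrative only; minikube uses its own SSH client and the key shown in the sshutil lines):
	
	    # from a Linux shell; macOS date(1) lacks %N
	    host=$(date +%s.%N)
	    guest=$(ssh docker@192.169.0.6 date +%s.%N)
	    echo "clock delta: $(echo "$guest - $host" | bc)s"
	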
	I0917 10:27:31.119250    4448 start.go:83] releasing machines lock for "ha-744000-m02", held for 13.631947572s
	I0917 10:27:31.119267    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.119393    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetIP
	I0917 10:27:31.143966    4448 out.go:177] * Found network options:
	I0917 10:27:31.164924    4448 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 10:27:31.185989    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.186029    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.186884    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187158    4448 main.go:141] libmachine: (ha-744000-m02) Calling .DriverName
	I0917 10:27:31.187319    4448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:27:31.187368    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	W0917 10:27:31.187382    4448 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 10:27:31.187491    4448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 10:27:31.187550    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHHostname
	I0917 10:27:31.187616    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187796    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.187813    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHPort
	I0917 10:27:31.187986    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188002    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHKeyPath
	I0917 10:27:31.188154    4448 main.go:141] libmachine: (ha-744000-m02) Calling .GetSSHUsername
	I0917 10:27:31.188197    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	I0917 10:27:31.188284    4448 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m02/id_rsa Username:docker}
	W0917 10:27:31.224656    4448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:27:31.224727    4448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:27:31.272646    4448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:27:31.272663    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.272743    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.288486    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 10:27:31.297401    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:27:31.306736    4448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.306808    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:27:31.316018    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.325058    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:27:31.334512    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:27:31.343837    4448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:27:31.353242    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:27:31.362032    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:27:31.371387    4448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
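	
	The sed runs above converge /etc/containerd/config.toml on a known shape: pause image pinned, cgroupfs instead of systemd cgroups, the runc v2 shim, the CNI conf dir, and unprivileged ports enabled. Approximately the fragment they produce (key placement depends on the config the ISO ships; a sketch, not the verbatim file):
	
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
	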
	I0917 10:27:31.380261    4448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:27:31.388512    4448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:27:31.396778    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.496690    4448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 10:27:31.515568    4448 start.go:495] detecting cgroup driver to use...
	I0917 10:27:31.515642    4448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:27:31.540737    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.552945    4448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:27:31.572641    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:27:31.584129    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.595235    4448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:27:31.619571    4448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:27:31.631020    4448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:27:31.646195    4448 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:27:31.649235    4448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:27:31.657206    4448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 10:27:31.670819    4448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:27:31.769091    4448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:27:31.876805    4448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:27:31.876827    4448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
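	
	The 130-byte daemon.json payload itself is not echoed in the log; based on the "cgroupfs" message above it is presumably along these lines (an assumption, not the captured file):
	
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	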
	I0917 10:27:31.890932    4448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:27:31.985803    4448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:28:33.019399    4448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.033193508s)
	I0917 10:28:33.019489    4448 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:28:33.055431    4448 out.go:201] 
	W0917 10:28:33.077249    4448 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:27:29 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.538749787Z" level=info msg="Starting up"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.539378325Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:27:29 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:29.541084999Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.558457504Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573199339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573220908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573258162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573299725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573411020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573446242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573553666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573587921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573599847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573607195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573685739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.573880273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575404717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575443775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575555494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575590640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575719071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.575763589Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.577951289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578038703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578076919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578089302Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578157091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.578202689Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580641100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580726566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580738845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580747690Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580756580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580765114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580772643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580781164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580790542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580798635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580806480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580814346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580832655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580847752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580858242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580866931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580879634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580898230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580906575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580914939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580923943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580931177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580940500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580948337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580963023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580980668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580989498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.580996636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581056206Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581091289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581104079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581113194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581120030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581133102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581145706Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581334956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581407817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581460834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:27:29 ha-744000-m02 dockerd[490]: time="2024-09-17T17:27:29.581473448Z" level=info msg="containerd successfully booted in 0.023887s"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.569483774Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.598149093Z" level=info msg="Loading containers: start."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.772640000Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.832682998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.874141710Z" level=info msg="Loading containers: done."
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885048604Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.885231945Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907500544Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:27:30 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:30.907671752Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:27:30 ha-744000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.038076014Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039237554Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:27:32 ha-744000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039672384Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039926596Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:27:32 ha-744000-m02 dockerd[484]: time="2024-09-17T17:27:32.039966362Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:27:33 ha-744000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:27:33 ha-744000-m02 dockerd[1165]: time="2024-09-17T17:27:33.083664420Z" level=info msg="Starting up"
	Sep 17 17:28:33 ha-744000-m02 dockerd[1165]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:28:33 ha-744000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
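	
	The journalctl capture narrows the failure: the first dockerd (pid 484) started its own managed containerd on /var/run/docker/containerd/containerd.sock and came up cleanly, but after minikube pushed its config and restarted the service, the new dockerd (pid 1165) blocked for 60s dialing the system socket /run/containerd/containerd.sock and gave up with "context deadline exceeded". The next things to check on the guest would be the system containerd unit and its socket, e.g.:
	
	    systemctl status containerd --no-pager        # did the system containerd unit survive the restart?
	    ls -l /run/containerd/containerd.sock         # does the socket the new dockerd dials exist?
	    journalctl -u containerd --no-pager | tail -n 50
	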
	W0917 10:28:33.077325    4448 out.go:270] * 
	W0917 10:28:33.078575    4448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:28:33.141292    4448 out.go:201] 
	
	
	==> Docker <==
	Sep 17 17:27:55 ha-744000 dockerd[1184]: time="2024-09-17T17:27:55.364009200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982686773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982795889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982809691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:04 ha-744000 dockerd[1184]: time="2024-09-17T17:28:04.982891719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908438866Z" level=info msg="shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908495753Z" level=warning msg="cleaning up after shim disconnected" id=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1184]: time="2024-09-17T17:28:15.908504694Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:15 ha-744000 dockerd[1178]: time="2024-09-17T17:28:15.909053440Z" level=info msg="ignoring event" container=66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.924890203Z" level=info msg="shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925281000Z" level=warning msg="cleaning up after shim disconnected" id=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1184]: time="2024-09-17T17:28:26.925315687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:26 ha-744000 dockerd[1178]: time="2024-09-17T17:28:26.926104549Z" level=info msg="ignoring event" container=6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981215245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981300627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981313170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:35 ha-744000 dockerd[1184]: time="2024-09-17T17:28:35.981748827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988154215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988302802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988330908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:46 ha-744000 dockerd[1184]: time="2024-09-17T17:28:46.988447275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 17:28:56 ha-744000 dockerd[1184]: time="2024-09-17T17:28:56.389051429Z" level=info msg="shim disconnected" id=b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963 namespace=moby
	Sep 17 17:28:56 ha-744000 dockerd[1184]: time="2024-09-17T17:28:56.389386460Z" level=warning msg="cleaning up after shim disconnected" id=b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963 namespace=moby
	Sep 17 17:28:56 ha-744000 dockerd[1184]: time="2024-09-17T17:28:56.389462873Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 17:28:56 ha-744000 dockerd[1178]: time="2024-09-17T17:28:56.389846849Z" level=info msg="ignoring event" container=b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3757e12da538a       175ffd71cce3d       16 seconds ago       Running             kube-controller-manager   5                   ac5039c087055       kube-controller-manager-ha-744000
	b526083efb4fc       6bab7719df100       27 seconds ago       Exited              kube-apiserver            4                   049299c96bb2c       kube-apiserver-ha-744000
	6b1d67e1da594       175ffd71cce3d       58 seconds ago       Exited              kube-controller-manager   4                   ac5039c087055       kube-controller-manager-ha-744000
	bbf0d2ebe5c6c       9aa1fad941575       About a minute ago   Running             kube-scheduler            2                   339a7c29b977e       kube-scheduler-ha-744000
	1e359ca4a791e       2e96e5913fc06       About a minute ago   Running             etcd                      2                   bf723b1d8bf7c       etcd-ha-744000
	6df162190be2a       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   026314418eb78       kube-vip-ha-744000
	1b95d7a1c7708       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       2                   375cde06a4bcf       storage-provisioner
	079da006755a7       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   f0eee6e67fe42       busybox-7dff88458-cn52t
	9f76145e8eaf7       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   8b4b5191649e7       kindnet-c59lr
	6a4aba3acb1e9       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   3888ce04e78db       coredns-7c65d6cfc9-khnlh
	fb8b83fe49a6e       60c005f310ff3       4 minutes ago        Exited              kube-proxy                1                   f1782d63db94f       kube-proxy-6xd2h
	24cfd031ec879       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   244f5bc456efc       coredns-7c65d6cfc9-j9jcc
	cfbfd57cf2b56       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   433c480eea542       kube-vip-ha-744000
	a7645ef2ae8dd       9aa1fad941575       5 minutes ago        Exited              kube-scheduler            1                   fbf79ae31cbab       kube-scheduler-ha-744000
	23a7e0d95a77c       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   55cb3d05ddf34       etcd-ha-744000
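	
	The status table shows the signature of a crash-looping control plane: etcd, the scheduler and kube-vip are Running, while kube-apiserver and kube-controller-manager keep exiting with rising ATTEMPT counts. With the apiserver down, kubectl is useless; crictl against the local runtime (via the /var/run/cri-dockerd.sock endpoint configured earlier) is the way to pull the failing container's logs, e.g.:
	
	    sudo crictl ps -a --name kube-apiserver       # list apiserver containers, including Exited ones
	    sudo crictl logs b526083efb4fc                # id taken from the CONTAINER column above
	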
	
	
	==> coredns [24cfd031ec87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52682 - 33898 "HINFO IN 2709939145458862568.721558315158165230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009931439s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[318103159]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.683) (total time: 30003ms):
	Trace[318103159]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:24:50.686)
	Trace[318103159]: [30.003131559s] [30.003131559s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1979128092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1979128092]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1979128092]: [30.000652416s] [30.000652416s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1978210991]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.688) (total time: 30000ms):
	Trace[1978210991]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:24:50.688)
	Trace[1978210991]: [30.000766886s] [30.000766886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
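	
	Both coredns instances report the same symptom: 30s i/o timeouts dialing 10.96.0.1:443, the in-cluster kubernetes Service VIP. That points at the apiserver/service-proxy path rather than at DNS itself. A direct probe from the guest, as a sketch:
	
	    curl -k -m 5 https://10.96.0.1:443/version    # expect a timeout or refusal while the apiserver is down
	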
	
	
	==> coredns [6a4aba3acb1e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60360 - 19575 "HINFO IN 3607648931521447410.3411894034218696920. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009401347s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1960564509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1960564509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.746)
	Trace[1960564509]: [30.00213331s] [30.00213331s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197674287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30002ms):
	Trace[1197674287]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[1197674287]: [30.002759704s] [30.002759704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[633118280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:24:20.745) (total time: 30003ms):
	Trace[633118280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:24:50.747)
	Trace[633118280]: [30.003193097s] [30.003193097s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 17:29:03.049832    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:03.051280    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:03.053146    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:03.055033    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 17:29:03.056759    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
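	
	Consistent with the container status above: nothing is serving on 8443, so every kubectl call is refused. A quick confirmation from the guest:
	
	    sudo ss -ltnp | grep 8443    # empty while the kube-apiserver container is Exited
	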
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035209] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007985] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[Sep17 17:27] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006963] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.845078] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.235754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000048] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.478686] systemd-fstab-generator[466]: Ignoring "noauto" option for root device
	[  +0.092656] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +2.006519] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.259762] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.049883] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051714] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.112681] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +2.485271] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	[  +0.103516] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.100618] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.134329] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.431436] systemd-fstab-generator[1594]: Ignoring "noauto" option for root device
	[  +6.580361] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.488197] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [1e359ca4a791] <==
	{"level":"info","ts":"2024-09-17T17:28:58.595884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:58.596021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:58.596042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:28:58.596061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:28:59.078913Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336213,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:28:59.263482Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:59.263606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"429e60237c9af887","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:28:59.550788Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-17T17:28:59.550978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000986204s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-17T17:28:59.551037Z","caller":"traceutil/trace.go:171","msg":"trace[1448586196] range","detail":"{range_begin:; range_end:; }","duration":"7.001056931s","start":"2024-09-17T17:28:52.549966Z","end":"2024-09-17T17:28:59.551023Z","steps":["trace[1448586196] 'agreement among raft nodes before linearized reading'  (duration: 7.000983297s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T17:28:59.551118Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-17T17:29:00.095087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:00.095166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:00.095179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:00.095190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:01.595733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:01.595766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:01.595775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:01.595786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	{"level":"warn","ts":"2024-09-17T17:29:02.185859Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-744000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-17T17:29:03.048141Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741143707336215,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T17:29:03.095615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:03.095678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:03.095694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-17T17:29:03.095707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2905] sent MsgPreVote request to 429e60237c9af887 at term 3"}
	
	
	==> etcd [23a7e0d95a77] <==
	{"level":"warn","ts":"2024-09-17T17:26:50.587150Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.962871734s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.169.0.5\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587161Z","caller":"traceutil/trace.go:171","msg":"trace[618307594] range","detail":"{range_begin:/registry/masterleases/192.169.0.5; range_end:; }","duration":"6.962884303s","start":"2024-09-17T17:26:43.624274Z","end":"2024-09-17T17:26:50.587158Z","steps":["trace[618307594] 'agreement among raft nodes before linearized reading'  (duration: 6.96287178s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:43.624238Z","time spent":"6.962930406s","remote":"127.0.0.1:50532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/masterleases/192.169.0.5\" "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.551739854s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587269Z","caller":"traceutil/trace.go:171","msg":"trace[474401785] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.551753744s","start":"2024-09-17T17:26:49.035511Z","end":"2024-09-17T17:26:50.587265Z","steps":["trace[474401785] 'agreement among raft nodes before linearized reading'  (duration: 1.551739815s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:49.035495Z","time spent":"1.551781157s","remote":"127.0.0.1:50648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.571949422s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:26:50.587333Z","caller":"traceutil/trace.go:171","msg":"trace[779412434] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"3.571960909s","start":"2024-09-17T17:26:47.015370Z","end":"2024-09-17T17:26:50.587331Z","steps":["trace[779412434] 'agreement among raft nodes before linearized reading'  (duration: 3.571949266s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:26:50.587344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:47.015364Z","time spent":"3.571976754s","remote":"127.0.0.1:50872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:26:50.587635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:26:45.985835Z","time spent":"4.601799065s","remote":"127.0.0.1:50768","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/09/17 17:26:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-17T17:26:50.686768Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:26:50.686883Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686894Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686906Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686956Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.686981Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687003Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.687012Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"429e60237c9af887"}
	{"level":"info","ts":"2024-09-17T17:26:50.698284Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698463Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T17:26:50.698473Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-744000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:29:03 up 2 min,  0 users,  load average: 0.13, 0.10, 0.04
	Linux ha-744000 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f76145e8eaf] <==
	I0917 17:26:11.511367       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:11.512152       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:11.512248       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:11.512772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:11.512871       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504250       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:21.504302       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:21.504625       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:21.504682       1 main.go:299] handling current node
	I0917 17:26:21.504706       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:21.504715       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:21.504816       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 17:26:21.504869       1 main.go:322] Node ha-744000-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:31.506309       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:31.506431       1 main.go:299] handling current node
	I0917 17:26:31.506449       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:31.506462       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:31.506621       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:31.506656       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.505932       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 17:26:41.506052       1 main.go:322] Node ha-744000-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:41.506553       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 17:26:41.506833       1 main.go:322] Node ha-744000-m04 has CIDR [10.244.4.0/24] 
	I0917 17:26:41.507226       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 17:26:41.507357       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b526083efb4f] <==
	I0917 17:28:36.086684       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 17:28:36.088246       1 server.go:142] Version: v1.31.1
	I0917 17:28:36.088285       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:36.354102       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 17:28:36.357696       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:28:36.370175       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 17:28:36.370322       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 17:28:36.370574       1 instance.go:232] Using reconciler: lease
	W0917 17:28:56.356155       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:56.356428       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 17:28:56.372981       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 17:28:56.373006       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3757e12da538] <==
	I0917 17:28:47.524594       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:47.708212       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:47.708245       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:47.709288       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:47.709434       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:47.709442       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:47.709457       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [6b1d67e1da59] <==
	I0917 17:28:05.497749       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:06.034875       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:06.034965       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:06.036148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:06.036157       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 17:28:06.036166       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:06.036173       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 17:28:26.901132       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [fb8b83fe49a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:24:21.123827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:24:21.146583       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 17:24:21.146876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:24:21.179243       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:24:21.179464       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:24:21.179596       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:24:21.183190       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:24:21.184459       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:24:21.184543       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:24:21.188244       1 config.go:199] "Starting service config controller"
	I0917 17:24:21.188350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:24:21.188588       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:24:21.188659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:24:21.192108       1 config.go:328] "Starting node config controller"
	I0917 17:24:21.192216       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:24:21.289888       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:24:21.289903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:24:21.293411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a7645ef2ae8d] <==
	E0917 17:23:52.361916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.361961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:23:52.361995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:23:52.362165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:23:52.362240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:23:52.362314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:23:52.362490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:23:52.362567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:23:52.362640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:23:52.362690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:23:52.362757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:23:52.362799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 17:23:53.372962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:26:50.603688       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bbf0d2ebe5c6] <==
	E0917 17:28:29.812295       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:31.899209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:31.899308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:32.373782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:32.373902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:35.010233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:35.010333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:28:46.379121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:46.379226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:47.366426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:47.366523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:48.382767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:48.383125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:49.591786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:49.592123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:49.647843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:49.648127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:50.257456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 17:28:50.257486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 17:28:59.608489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:28:59.608581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:02.003249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:29:02.003304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:03.803116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 17:29:03.803208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 17 17:28:40 ha-744000 kubelet[1601]: I0917 17:28:40.664516    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:42 ha-744000 kubelet[1601]: E0917 17:28:42.873716    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:42 ha-744000 kubelet[1601]: E0917 17:28:42.873771    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:45 ha-744000 kubelet[1601]: W0917 17:28:45.945277    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:45 ha-744000 kubelet[1601]: E0917 17:28:45.945790    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:46 ha-744000 kubelet[1601]: I0917 17:28:46.944782    1601 scope.go:117] "RemoveContainer" containerID="6b1d67e1da5948298632ad424519f8fce6e26a26617e516f98f85ba276454721"
	Sep 17 17:28:46 ha-744000 kubelet[1601]: E0917 17:28:46.945144    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:49 ha-744000 kubelet[1601]: E0917 17:28:49.017860    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	Sep 17 17:28:49 ha-744000 kubelet[1601]: I0917 17:28:49.875548    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:52 ha-744000 kubelet[1601]: E0917 17:28:52.089297    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:28:52 ha-744000 kubelet[1601]: E0917 17:28:52.090072    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:28:56 ha-744000 kubelet[1601]: E0917 17:28:56.945624    1601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-744000\" not found"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: I0917 17:28:57.244872    1601 scope.go:117] "RemoveContainer" containerID="66235de21ec80d860e8f0e9cfafa05214e465c4d09678b01e80ca97694636937"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: I0917 17:28:57.245459    1601 scope.go:117] "RemoveContainer" containerID="b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963"
	Sep 17 17:28:57 ha-744000 kubelet[1601]: E0917 17:28:57.245556    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:28:58 ha-744000 kubelet[1601]: W0917 17:28:58.233904    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:58 ha-744000 kubelet[1601]: E0917 17:28:58.234030    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:58 ha-744000 kubelet[1601]: W0917 17:28:58.233904    1601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-744000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 17:28:58 ha-744000 kubelet[1601]: E0917 17:28:58.234177    1601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-744000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 17:28:59 ha-744000 kubelet[1601]: I0917 17:28:59.091390    1601 kubelet_node_status.go:72] "Attempting to register node" node="ha-744000"
	Sep 17 17:28:59 ha-744000 kubelet[1601]: I0917 17:28:59.411014    1601 scope.go:117] "RemoveContainer" containerID="b526083efb4fc73885b1a2e3bf2184b3f5c79bf052ac174a124d5ca46b0a4963"
	Sep 17 17:28:59 ha-744000 kubelet[1601]: E0917 17:28:59.411150    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-744000_kube-system(5122b3c5b6b107f6a71d263fb9595f1e)\"" pod="kube-system/kube-apiserver-ha-744000" podUID="5122b3c5b6b107f6a71d263fb9595f1e"
	Sep 17 17:29:01 ha-744000 kubelet[1601]: E0917 17:29:01.305083    1601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-744000"
	Sep 17 17:29:01 ha-744000 kubelet[1601]: E0917 17:29:01.305162    1601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-744000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 17:29:01 ha-744000 kubelet[1601]: E0917 17:29:01.305201    1601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-744000.17f61820eeb0604a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-744000,UID:ha-744000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-744000,},FirstTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,LastTimestamp:2024-09-17 17:27:16.865720394 +0000 UTC m=+0.127039804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-744000,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-744000 -n ha-744000: exit status 2 (157.582932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-744000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.84s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (136.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-101000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0917 10:33:58.535243    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-101000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.775303884s)

                                                
                                                
-- stdout --
	* [mount-start-1-101000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-101000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-101000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:b9:50:92:74:2d
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-101000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:3f:d3:dd:a2:d8
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:3f:d3:dd:a2:d8
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-101000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-101000 -n mount-start-1-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-101000 -n mount-start-1-101000: exit status 7 (79.493083ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:35:24.471677    5112 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 10:35:24.471704    5112 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-101000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.86s)

                                                
                                    
TestPreload (179.76s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-782000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-782000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m13.870103108s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-782000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-782000 image pull gcr.io/k8s-minikube/busybox: (1.503374649s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-782000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-782000: (8.385697119s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-782000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0917 10:46:20.002112    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:47:01.652035    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-782000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : exit status 90 (1m30.563571617s)

                                                
                                                
-- stdout --
	* [test-preload-782000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the hyperkit driver based on existing profile
	* Starting "test-preload-782000" primary control-plane node in "test-preload-782000" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperkit VM for "test-preload-782000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 10:45:41.113648    5807 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:45:41.113905    5807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:45:41.113910    5807 out.go:358] Setting ErrFile to fd 2...
	I0917 10:45:41.113914    5807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:45:41.114084    5807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:45:41.115557    5807 out.go:352] Setting JSON to false
	I0917 10:45:41.138872    5807 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4508,"bootTime":1726590633,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:45:41.139017    5807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:45:41.160599    5807 out.go:177] * [test-preload-782000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:45:41.202764    5807 notify.go:220] Checking for updates...
	I0917 10:45:41.223592    5807 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:45:41.244867    5807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:45:41.265445    5807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:45:41.286658    5807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:45:41.307637    5807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:45:41.328505    5807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:45:41.350449    5807 config.go:182] Loaded profile config "test-preload-782000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0917 10:45:41.351249    5807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:45:41.351316    5807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:45:41.361046    5807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53546
	I0917 10:45:41.361404    5807 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:45:41.361793    5807 main.go:141] libmachine: Using API Version  1
	I0917 10:45:41.361808    5807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:45:41.362023    5807 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:45:41.362139    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:45:41.383415    5807 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 10:45:41.404757    5807 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:45:41.405314    5807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:45:41.405358    5807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:45:41.415028    5807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53548
	I0917 10:45:41.415365    5807 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:45:41.415673    5807 main.go:141] libmachine: Using API Version  1
	I0917 10:45:41.415681    5807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:45:41.415898    5807 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:45:41.416017    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:45:41.444781    5807 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:45:41.486371    5807 start.go:297] selected driver: hyperkit
	I0917 10:45:41.486400    5807 start.go:901] validating driver "hyperkit" against &{Name:test-preload-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:45:41.486628    5807 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:45:41.490450    5807 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:45:41.490570    5807 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 10:45:41.498845    5807 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 10:45:41.503419    5807 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:45:41.503442    5807 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 10:45:41.503546    5807 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 10:45:41.503576    5807 cni.go:84] Creating CNI manager for ""
	I0917 10:45:41.503632    5807 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 10:45:41.503703    5807 start.go:340] cluster config:
	{Name:test-preload-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:45:41.503793    5807 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 10:45:41.545814    5807 out.go:177] * Starting "test-preload-782000" primary control-plane node in "test-preload-782000" cluster
	I0917 10:45:41.566343    5807 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0917 10:45:41.666165    5807 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0917 10:45:41.666222    5807 cache.go:56] Caching tarball of preloaded images
	I0917 10:45:41.666573    5807 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0917 10:45:41.688572    5807 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0917 10:45:41.709974    5807 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0917 10:45:41.791400    5807 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0917 10:45:55.542117    5807 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0917 10:45:55.542300    5807 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0917 10:45:56.118279    5807 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
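The download step above (download.go:107) fetches the preload tarball with an md5 digest in its query string and then verifies the saved file against that digest before caching it. A minimal Go sketch of this download-then-verify step, for illustration only: the URL and digest are copied from the log lines above, while the helper names are hypothetical and not minikube's actual API.

	// Sketch of a checksum-verified download, assuming the URL and md5
	// shown in the log. Not minikube's real implementation.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// download saves the response body of url to dst.
	func download(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	// verifyMD5 streams the file through an md5 hash and compares digests.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4"
		const dst = "preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4"
		if err := download(url, dst); err != nil {
			panic(err)
		}
		// Digest taken from the ?checksum=md5:... parameter in the log.
		if err := verifyMD5(dst, "20cbd62a1b5d1968f21881a4a0f4f59e"); err != nil {
			panic(err)
		}
	}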
	I0917 10:45:56.118359    5807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/test-preload-782000/config.json ...
	I0917 10:45:56.142303    5807 start.go:360] acquireMachinesLock for test-preload-782000: {Name:mkce0cf35e1c9b6443a7e9ce598394c9889b0595 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 10:45:56.142442    5807 start.go:364] duration metric: took 110.577µs to acquireMachinesLock for "test-preload-782000"
	I0917 10:45:56.142516    5807 start.go:96] Skipping create...Using existing machine configuration
	I0917 10:45:56.142536    5807 fix.go:54] fixHost starting: 
	I0917 10:45:56.142970    5807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:45:56.143009    5807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:45:56.153889    5807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53554
	I0917 10:45:56.154410    5807 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:45:56.154872    5807 main.go:141] libmachine: Using API Version  1
	I0917 10:45:56.154881    5807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:45:56.155140    5807 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:45:56.155258    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:45:56.155387    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetState
	I0917 10:45:56.155475    5807 main.go:141] libmachine: (test-preload-782000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:45:56.155540    5807 main.go:141] libmachine: (test-preload-782000) DBG | hyperkit pid from json: 5730
	I0917 10:45:56.156581    5807 main.go:141] libmachine: (test-preload-782000) DBG | hyperkit pid 5730 missing from process table
	I0917 10:45:56.156619    5807 fix.go:112] recreateIfNeeded on test-preload-782000: state=Stopped err=<nil>
	I0917 10:45:56.156636    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	W0917 10:45:56.156735    5807 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 10:45:56.178269    5807 out.go:177] * Restarting existing hyperkit VM for "test-preload-782000" ...
	I0917 10:45:56.221230    5807 main.go:141] libmachine: (test-preload-782000) Calling .Start
	I0917 10:45:56.221484    5807 main.go:141] libmachine: (test-preload-782000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:45:56.221517    5807 main.go:141] libmachine: (test-preload-782000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/hyperkit.pid
	I0917 10:45:56.222984    5807 main.go:141] libmachine: (test-preload-782000) DBG | hyperkit pid 5730 missing from process table
	I0917 10:45:56.222998    5807 main.go:141] libmachine: (test-preload-782000) DBG | pid 5730 is in state "Stopped"
	I0917 10:45:56.223016    5807 main.go:141] libmachine: (test-preload-782000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/hyperkit.pid...
	I0917 10:45:56.223341    5807 main.go:141] libmachine: (test-preload-782000) DBG | Using UUID 7c07aea8-4c11-4389-9522-e8b414167db7
	I0917 10:45:56.332050    5807 main.go:141] libmachine: (test-preload-782000) DBG | Generated MAC aa:5d:13:28:65:8d
	I0917 10:45:56.332068    5807 main.go:141] libmachine: (test-preload-782000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-782000
	I0917 10:45:56.332205    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7c07aea8-4c11-4389-9522-e8b414167db7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbda0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:45:56.332242    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7c07aea8-4c11-4389-9522-e8b414167db7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bbda0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 10:45:56.332281    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7c07aea8-4c11-4389-9522-e8b414167db7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/test-preload-782000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-782000"}
	I0917 10:45:56.332324    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7c07aea8-4c11-4389-9522-e8b414167db7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/test-preload-782000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/tty,log=/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/console-ring -f kexec,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/bzimage,/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-782000"
	I0917 10:45:56.332337    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 10:45:56.333638    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 DEBUG: hyperkit: Pid is 5824
	I0917 10:45:56.333994    5807 main.go:141] libmachine: (test-preload-782000) DBG | Attempt 0
	I0917 10:45:56.334007    5807 main.go:141] libmachine: (test-preload-782000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:45:56.334060    5807 main.go:141] libmachine: (test-preload-782000) DBG | hyperkit pid from json: 5824
	I0917 10:45:56.335897    5807 main.go:141] libmachine: (test-preload-782000) DBG | Searching for aa:5d:13:28:65:8d in /var/db/dhcpd_leases ...
	I0917 10:45:56.336007    5807 main.go:141] libmachine: (test-preload-782000) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0917 10:45:56.336073    5807 main.go:141] libmachine: (test-preload-782000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:5d:13:28:65:8d ID:1,aa:5d:13:28:65:8d Lease:0x66eb117b}
	I0917 10:45:56.336099    5807 main.go:141] libmachine: (test-preload-782000) DBG | Found match: aa:5d:13:28:65:8d
	I0917 10:45:56.336109    5807 main.go:141] libmachine: (test-preload-782000) DBG | IP: 192.169.0.17
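To recover the VM's IP after restart, the driver scans /var/db/dhcpd_leases for the MAC address it generated, as logged above. A rough Go sketch of such a scan; the lease-file grammar is an assumption here, and the IPAddress/HWAddress field names follow the dhcp entry echoed in the log rather than the raw file format:

	// Find the IP associated with a MAC in a dhcpd_leases-style file.
	// Illustrative only: real lease files may not match the
	// {Name:... IPAddress:... HWAddress:...} shape printed in the log.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// findIP returns the first IPAddress found on a line containing mac.
	func findIP(leases *os.File, mac string) (string, bool) {
		ipRe := regexp.MustCompile(`IPAddress:([0-9.]+)`)
		sc := bufio.NewScanner(leases)
		for sc.Scan() {
			line := sc.Text()
			if strings.Contains(line, mac) {
				if m := ipRe.FindStringSubmatch(line); m != nil {
					return m[1], true
				}
			}
		}
		return "", false
	}

	func main() {
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// MAC copied from the "Generated MAC" log line above.
		if ip, ok := findIP(f, "aa:5d:13:28:65:8d"); ok {
			fmt.Println("IP:", ip) // expect 192.169.0.17 per the log
		}
	}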
	I0917 10:45:56.336130    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetConfigRaw
	I0917 10:45:56.336774    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetIP
	I0917 10:45:56.336940    5807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/test-preload-782000/config.json ...
	I0917 10:45:56.337345    5807 machine.go:93] provisionDockerMachine start ...
	I0917 10:45:56.337357    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:45:56.337499    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:45:56.337625    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:45:56.337725    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:45:56.337815    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:45:56.337907    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:45:56.338122    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:45:56.338404    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:45:56.338413    5807 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 10:45:56.341650    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 10:45:56.393637    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 10:45:56.394322    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:45:56.394336    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:45:56.394344    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:45:56.394352    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:45:56.779056    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 10:45:56.779073    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 10:45:56.893813    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 10:45:56.893837    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 10:45:56.893848    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 10:45:56.893856    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 10:45:56.894761    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 10:45:56.894774    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:45:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 10:46:02.505649    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:46:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 10:46:02.505689    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:46:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 10:46:02.505696    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:46:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 10:46:02.529806    5807 main.go:141] libmachine: (test-preload-782000) DBG | 2024/09/17 10:46:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 10:46:07.416002    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 10:46:07.416017    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetMachineName
	I0917 10:46:07.416163    5807 buildroot.go:166] provisioning hostname "test-preload-782000"
	I0917 10:46:07.416174    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetMachineName
	I0917 10:46:07.416272    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.416393    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:07.416492    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.416604    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.416688    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:07.416830    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:07.416966    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:46:07.416974    5807 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-782000 && echo "test-preload-782000" | sudo tee /etc/hostname
	I0917 10:46:07.493032    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-782000
	
	I0917 10:46:07.493054    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.493181    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:07.493273    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.493366    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.493466    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:07.493618    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:07.493766    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:46:07.493778    5807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-782000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-782000/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-782000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 10:46:07.565065    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 10:46:07.565088    5807 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19662-1558/.minikube CaCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19662-1558/.minikube}
	I0917 10:46:07.565109    5807 buildroot.go:174] setting up certificates
	I0917 10:46:07.565115    5807 provision.go:84] configureAuth start
	I0917 10:46:07.565123    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetMachineName
	I0917 10:46:07.565260    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetIP
	I0917 10:46:07.565366    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.565468    5807 provision.go:143] copyHostCerts
	I0917 10:46:07.565565    5807 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem, removing ...
	I0917 10:46:07.565573    5807 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem
	I0917 10:46:07.565717    5807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/ca.pem (1078 bytes)
	I0917 10:46:07.565948    5807 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem, removing ...
	I0917 10:46:07.565954    5807 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem
	I0917 10:46:07.566030    5807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/cert.pem (1123 bytes)
	I0917 10:46:07.566206    5807 exec_runner.go:144] found /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem, removing ...
	I0917 10:46:07.566218    5807 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem
	I0917 10:46:07.566300    5807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19662-1558/.minikube/key.pem (1675 bytes)
	I0917 10:46:07.566467    5807 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca-key.pem org=jenkins.test-preload-782000 san=[127.0.0.1 192.169.0.17 localhost minikube test-preload-782000]
	I0917 10:46:07.622804    5807 provision.go:177] copyRemoteCerts
	I0917 10:46:07.622861    5807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 10:46:07.622874    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.623019    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:07.623117    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.623210    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:07.623294    5807 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/id_rsa Username:docker}
	I0917 10:46:07.663109    5807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 10:46:07.683391    5807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 10:46:07.703258    5807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 10:46:07.722926    5807 provision.go:87] duration metric: took 157.797892ms to configureAuth
	I0917 10:46:07.722939    5807 buildroot.go:189] setting minikube options for container-runtime
	I0917 10:46:07.723065    5807 config.go:182] Loaded profile config "test-preload-782000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0917 10:46:07.723078    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:46:07.723227    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.723319    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:07.723418    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.723493    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.723584    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:07.723708    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:07.723829    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:46:07.723836    5807 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 10:46:07.788190    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 10:46:07.788205    5807 buildroot.go:70] root file system type: tmpfs
	I0917 10:46:07.788285    5807 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 10:46:07.788298    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.788437    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:07.788538    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.788650    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.788746    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:07.788896    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:07.789038    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:46:07.789081    5807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 10:46:07.864617    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 10:46:07.864646    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:07.864786    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:07.864877    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.864980    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:07.865068    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:07.865206    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:07.865347    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:46:07.865358    5807 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 10:46:09.472852    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 10:46:09.472866    5807 machine.go:96] duration metric: took 13.135446664s to provisionDockerMachine
	I0917 10:46:09.472879    5807 start.go:293] postStartSetup for "test-preload-782000" (driver="hyperkit")
	I0917 10:46:09.472887    5807 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 10:46:09.472896    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:46:09.473104    5807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 10:46:09.473118    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:09.473220    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:09.473324    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:09.473416    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:09.473499    5807 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/id_rsa Username:docker}
	I0917 10:46:09.515747    5807 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 10:46:09.519259    5807 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 10:46:09.519275    5807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/addons for local assets ...
	I0917 10:46:09.519378    5807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19662-1558/.minikube/files for local assets ...
	I0917 10:46:09.519568    5807 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem -> 21212.pem in /etc/ssl/certs
	I0917 10:46:09.519778    5807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 10:46:09.536053    5807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/ssl/certs/21212.pem --> /etc/ssl/certs/21212.pem (1708 bytes)
	I0917 10:46:09.557379    5807 start.go:296] duration metric: took 84.492088ms for postStartSetup
	I0917 10:46:09.557403    5807 fix.go:56] duration metric: took 13.414810857s for fixHost
	I0917 10:46:09.557415    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:09.557553    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:09.557652    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:09.557747    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:09.557838    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:09.557973    5807 main.go:141] libmachine: Using SSH client type: native
	I0917 10:46:09.558108    5807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x77c1820] 0x77c4500 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0917 10:46:09.558115    5807 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 10:46:09.622920    5807 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726595169.667086393
	
	I0917 10:46:09.622934    5807 fix.go:216] guest clock: 1726595169.667086393
	I0917 10:46:09.622939    5807 fix.go:229] Guest: 2024-09-17 10:46:09.667086393 -0700 PDT Remote: 2024-09-17 10:46:09.557406 -0700 PDT m=+28.479451856 (delta=109.680393ms)
	I0917 10:46:09.622960    5807 fix.go:200] guest clock delta is within tolerance: 109.680393ms
	I0917 10:46:09.622964    5807 start.go:83] releasing machines lock for "test-preload-782000", held for 13.48043147s
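The fixHost step above also sanity-checks the guest clock: it runs date +%s.%N over SSH and compares the result with the host's wall clock (here the delta is about 110ms, reported as within tolerance). A small Go sketch of that comparison; the tolerance constant is an assumption, since the log does not state minikube's actual threshold:

	// Compare a guest timestamp (seconds.nanoseconds, the output of
	// `date +%s.%N`) against the local clock. Illustrative only.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestDelta parses the guest's clock reading and returns the absolute
	// offset from now. float64 parsing is lossy at nanosecond precision,
	// which is fine for a tolerance check.
	func guestDelta(out string, now time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := guest.Sub(now)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		const tolerance = 2 * time.Second // assumed threshold, not from the log
		// Guest reading copied from the log's `date +%s.%N` output.
		d, err := guestDelta("1726595169.667086393", time.Now())
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, d <= tolerance)
	}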
	I0917 10:46:09.622980    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:46:09.623115    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetIP
	I0917 10:46:09.623222    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:46:09.623567    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:46:09.623675    5807 main.go:141] libmachine: (test-preload-782000) Calling .DriverName
	I0917 10:46:09.623847    5807 ssh_runner.go:195] Run: cat /version.json
	I0917 10:46:09.623862    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:09.623949    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:09.624034    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:09.624112    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:09.624191    5807 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/id_rsa Username:docker}
	I0917 10:46:09.624412    5807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 10:46:09.624443    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHHostname
	I0917 10:46:09.624539    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHPort
	I0917 10:46:09.624626    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHKeyPath
	I0917 10:46:09.624715    5807 main.go:141] libmachine: (test-preload-782000) Calling .GetSSHUsername
	I0917 10:46:09.624793    5807 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/test-preload-782000/id_rsa Username:docker}
	I0917 10:46:09.657956    5807 ssh_runner.go:195] Run: systemctl --version
	I0917 10:46:09.700912    5807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 10:46:09.706202    5807 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 10:46:09.706253    5807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 10:46:09.719048    5807 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 10:46:09.719060    5807 start.go:495] detecting cgroup driver to use...
	I0917 10:46:09.719159    5807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:46:09.734131    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 10:46:09.742284    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 10:46:09.750499    5807 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 10:46:09.750548    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 10:46:09.758883    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:46:09.767091    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 10:46:09.775209    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 10:46:09.783430    5807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 10:46:09.791854    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 10:46:09.800169    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 10:46:09.808438    5807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 10:46:09.816600    5807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 10:46:09.823974    5807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 10:46:09.831382    5807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:09.932195    5807 ssh_runner.go:195] Run: sudo systemctl restart containerd
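The run of sed commands above rewrites /etc/containerd/config.toml in place to pin containerd to the cgroupfs driver before restarting it. As the log shows, minikube shells out to sed for this; purely for illustration, a Go equivalent of the SystemdCgroup rewrite:

	// In-place rewrite of the SystemdCgroup setting, mirroring:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// Needs root to touch the real path; sketch only.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// (?m) makes ^/$ match per line; ${1} preserves the indentation.
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}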
	I0917 10:46:09.950950    5807 start.go:495] detecting cgroup driver to use...
	I0917 10:46:09.951034    5807 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 10:46:09.970962    5807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:46:09.983544    5807 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 10:46:10.002349    5807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 10:46:10.014027    5807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:46:10.025259    5807 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 10:46:10.049136    5807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 10:46:10.060509    5807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 10:46:10.075929    5807 ssh_runner.go:195] Run: which cri-dockerd
	I0917 10:46:10.078962    5807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 10:46:10.086938    5807 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 10:46:10.100374    5807 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 10:46:10.195407    5807 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 10:46:10.311663    5807 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 10:46:10.311735    5807 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 10:46:10.325715    5807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 10:46:10.422214    5807 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 10:47:11.442006    5807 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019472833s)
	I0917 10:47:11.442105    5807 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 10:47:11.478695    5807 out.go:201] 
	W0917 10:47:11.501409    5807 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 17:46:08 test-preload-782000 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:46:08 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:08.211755091Z" level=info msg="Starting up"
	Sep 17 17:46:08 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:08.213656655Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 17:46:08 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:08.214198079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=491
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.228727142Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246303910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246371890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246438763Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246478781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246740026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246796629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.246949918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.247024646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.247056380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.247085261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.247242802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.247496150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.249149686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.249202033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.249338592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.249381337Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.249533685Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.249594723Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.252911421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.252974393Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253015549Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253049873Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253084154Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253154170Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253317653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253474887Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253518079Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253554692Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253586789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253631893Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253665954Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253697168Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253728053Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253757693Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253786471Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253814373Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253851574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253882483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253911104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253940904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.253972113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254001465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254030689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254060315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254089749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254121230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254149933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254222015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254254310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254285636Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254320478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254351253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254383399Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254449041Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254490932Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254684350Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254777531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254813487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254848472Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.254882981Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.255037960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.255155973Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.255302548Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 17:46:08 test-preload-782000 dockerd[491]: time="2024-09-17T17:46:08.255345347Z" level=info msg="containerd successfully booted in 0.027274s"
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.234265912Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.248924321Z" level=info msg="Loading containers: start."
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.381278161Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.442710823Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.486017504Z" level=info msg="Loading containers: done."
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.492773705Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.492901259Z" level=info msg="Daemon has completed initialization"
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.514207466Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 17:46:09 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:09.514332383Z" level=info msg="API listen on [::]:2376"
	Sep 17 17:46:09 test-preload-782000 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:46:10 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:10.478720727Z" level=info msg="Processing signal 'terminated'"
	Sep 17 17:46:10 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:10.479612575Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 17:46:10 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:10.479676476Z" level=info msg="Daemon shutdown complete"
	Sep 17 17:46:10 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:10.479707430Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 17:46:10 test-preload-782000 dockerd[484]: time="2024-09-17T17:46:10.479718113Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 17:46:10 test-preload-782000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 17:46:11 test-preload-782000 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 17:46:11 test-preload-782000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 17:46:11 test-preload-782000 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 17:46:11 test-preload-782000 dockerd[912]: time="2024-09-17T17:46:11.513963089Z" level=info msg="Starting up"
	Sep 17 17:47:11 test-preload-782000 dockerd[912]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 17:47:11 test-preload-782000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 17:47:11 test-preload-782000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 17:47:11 test-preload-782000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	... (identical to the dockerd/containerd journal dumped above) ...
	
	-- /stdout --
	W0917 10:47:11.501490    5807 out.go:270] * 
	W0917 10:47:11.502798    5807 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 10:47:11.566297    5807 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:68: out/minikube-darwin-amd64 start -p test-preload-782000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  failed: exit status 90
panic.go:629: *** TestPreload FAILED at 2024-09-17 10:47:11.633124 -0700 PDT m=+3133.515807810
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-782000 -n test-preload-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-782000 -n test-preload-782000: exit status 6 (160.833485ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:47:11.782027    5836 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-782000" does not appear in /Users/jenkins/minikube-integration/19662-1558/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-782000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-782000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-782000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-782000: (5.263199494s)
--- FAIL: TestPreload (179.76s)
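The RUNTIME_ENABLE failure above reduces to one journal line: after the docker.service restart, dockerd timed out dialing containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so the daemon never came back up. A minimal, hypothetical Go sketch of that same dial (not minikube code; the socket path is taken from the journal above and the 5-second timeout is an assumption), which can be run inside the guest to check whether containerd is accepting connections:

	// probe_containerd.go: reproduce the startup dial dockerd performs.
	package main

	import (
		"context"
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock" // path from the journal above

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		var d net.Dialer
		conn, err := d.DialContext(ctx, "unix", sock)
		if err != nil {
			// "context deadline exceeded" here matches the daemon failure above:
			// containerd never started listening after the service restart.
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}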

                                                
                                    
TestScheduledStopUnix (142s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-530000 --memory=2048 --driver=hyperkit 
E0917 10:48:58.539762    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-530000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.664427578s)

                                                
                                                
-- stdout --
	* [scheduled-stop-530000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-530000" primary control-plane node in "scheduled-stop-530000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-530000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:42:42:b8:f2:38
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-530000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:aa:10:ab:b6:e4
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:aa:10:ab:b6:e4
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-530000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-530000" primary control-plane node in "scheduled-stop-530000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-530000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:42:42:b8:f2:38
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-530000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:aa:10:ab:b6:e4
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:aa:10:ab:b6:e4
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-17 10:49:33.722866 -0700 PDT m=+3275.604846316
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-530000 -n scheduled-stop-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-530000 -n scheduled-stop-530000: exit status 7 (79.074621ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 10:49:33.799712    5895 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 10:49:33.799733    5895 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-530000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-530000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-530000: (5.252332451s)
--- FAIL: TestScheduledStopUnix (142.00s)
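Both StartHost attempts above failed the same way: "IP address never found in dhcp leases file", i.e. the hyperkit driver kept polling the macOS lease file for the new VM's MAC address and no entry ever appeared. A minimal sketch of that lookup, assuming the usual /var/db/dhcpd_leases entry format (name=/ip_address=/hw_address= lines inside braces); the MAC below is the one from the first attempt above and is illustrative only:

	// find_lease.go: scan the macOS DHCP lease file for a VM's MAC address.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const mac = "ca:42:42:b8:f2:38" // from the StartHost error above

		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// Entries look like "hw_address=1,ca:42:42:b8:f2:38" (macOS may
			// drop leading zeros in octets; this sketch ignores that quirk).
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				fmt.Printf("lease found: %s -> %s\n", mac, ip)
				return
			}
		}
		// No matching entry: the guest never obtained a lease, which is
		// exactly the condition the driver reports above.
		fmt.Fprintf(os.Stderr, "no lease for %s\n", mac)
		os.Exit(1)
	}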

                                                
                                    
TestPause/serial/Start (141.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-367000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-367000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.495003183s)

                                                
                                                
-- stdout --
	* [pause-367000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-367000" primary control-plane node in "pause-367000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-367000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:65:93:8f:c4:28
	* Failed to start hyperkit VM. Running "minikube delete -p pause-367000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:79:92:63:1c:69
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:79:92:63:1c:69
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-367000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-367000 -n pause-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-367000 -n pause-367000: exit status 7 (80.140938ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 11:33:23.717313    8169 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 11:33:23.717338    8169 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-367000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.58s)

                                                
                                    
TestNoKubernetes/serial/Start (144.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --driver=hyperkit 
E0917 11:33:58.591555    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --driver=hyperkit : signal: killed (2m24.508105993s)

                                                
                                                
-- stdout --
	* [NoKubernetes-747000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster NoKubernetes-747000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --driver=hyperkit " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-747000 -n NoKubernetes-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-747000 -n NoKubernetes-747000: exit status 7 (73.648347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (144.58s)
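Unlike the exit-status-80 failures above, this run ends in "signal: killed": the harness runs each minikube command under a context deadline (derived from the test binary's overall -timeout; see the 2h0m0s panic in the next section), and when that deadline expires os/exec kills the child process. A minimal sketch of where that exact error string comes from, with sleep standing in for the long-running minikube start:

	// killed_subprocess.go: demonstrate the "signal: killed" error text.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		// When ctx expires, exec sends SIGKILL to the child.
		cmd := exec.CommandContext(ctx, "sleep", "60")
		if err := cmd.Run(); err != nil {
			fmt.Println(err) // prints "signal: killed" once the deadline fires
		}
	}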

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (7201.731s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-553000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.1
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (1h3m25s)
		TestNetworkPlugins/group (10m49s)
		TestStartStop (23m53s)
		TestStartStop/group/default-k8s-diff-port (2m22s)
		TestStartStop/group/default-k8s-diff-port/serial (2m22s)
		TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1m15s)
		TestStartStop/group/newest-cni (19s)
		TestStartStop/group/newest-cni/serial (19s)
		TestStartStop/group/newest-cni/serial/FirstStart (19s)

                                                
                                                
goroutine 4659 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0006f1520, 0xc00006fbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000010300, {0x11f56f60, 0x2a, 0x2a}, {0xd6424d6?, 0xffffffffffffffff?, 0x11f7ae20?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000892aa0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000892aa0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0005fc880)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 153 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc000233590, 0x2d)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00122bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0002336c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001230000, {0x1097dde0, 0xc000534090}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001230000, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 88 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xff
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 87
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x167

                                                
                                                
goroutine 1760 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc001512780, 0xc00010ea10)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1759
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3782 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3798
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3457 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0008974d0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc002206d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000897500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00196e340, {0x1097dde0, 0xc0019042a0}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00196e340, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3455
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4154 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0019b1190, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00157dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019b11c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b43540, {0x1097dde0, 0xc0019059b0}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b43540, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3172 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00194f040, {0xf4165d0?, 0x0?}, 0xc001881f00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00194f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00194f040, 0xc0021c61c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3169
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3227 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019b0380, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3226 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 155 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 154 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc001229f50, 0xc001229f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x70?, 0xc001229f50, 0xc001229f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc000926680?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd7cc705?, 0xc000223800?, 0xc000068e70?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 166 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0002336c0, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4172 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019b11c0, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4150
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3346 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3345
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3236 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0019b0350, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00157ad80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019b0380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019f8280, {0x1097dde0, 0xc001f32900}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019f8280, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3227
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2633 [chan receive, 63 minutes]:
testing.(*T).Run(0xc001312680, {0xf414f6c?, 0x4064d034334?}, 0xc001507c68)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001312680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001312680, 0x1096ebf8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 1816 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018d0d80, 0xc001a8c5b0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1296
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3511 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3510
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4269 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4268
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3509 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0004b6250, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00157bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004b6740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00012f160, {0x1097dde0, 0xc001f33a70}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00012f160, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3493
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3238 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3237
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1421 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1325
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3455 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000897500, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3453
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3711 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3707
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4657 [IO wait]:
internal/poll.runtime_pollWait(0x599e1e88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001f3cea0?, 0xc001544bff?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001f3cea0, {0xc001544bff, 0x9401, 0x9401})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0c408, {0xc001544bff?, 0x10?, 0xfe5c?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0022970e0, {0x1097c5d8, 0xc00172e0a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1097c760, 0xc0022970e0}, {0x1097c5d8, 0xc00172e0a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x100000011e88880?, {0x1097c760, 0xc0022970e0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x11f17340?, {0x1097c760?, 0xc0022970e0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x1097c760, 0xc0022970e0}, {0x1097c6c0, 0xc000a0c408}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001738100?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4655
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3510 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0015da750, 0xc0015da798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x30?, 0xc0015da750, 0xc0015da798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc00179e820?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd7cc705?, 0xc001512600?, 0xc001808230?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3493
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3804 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc00151c750, 0xc00151c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0xa8?, 0xc00151c750, 0xc00151c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc0012376c0?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00151c7d0?, 0xd7cc764?, 0xc001223c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4626 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x59a89c70, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001f3c000?, 0xc00148e4c9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001f3c000, {0xc00148e4c9, 0x337, 0x337})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0c600, {0xc00148e4c9?, 0x59733bc8?, 0x20d?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00214a420, {0x1097c5d8, 0xc00172e340})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1097c760, 0xc00214a420}, {0x1097c5d8, 0xc00172e340}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x11e88880?, {0x1097c760, 0xc00214a420})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x11f17340?, {0x1097c760?, 0xc00214a420?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x1097c760, 0xc00214a420}, {0x1097c6c0, 0xc000a0c600}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001738d00?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4625
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 4155 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0015d6f50, 0xc0015d6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0xb0?, 0xc0015d6f50, 0xc0015d6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0x17c202020202020?, 0x20736e696b6e656a?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015d6fd0?, 0xd7cc764?, 0xc0018d0a80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3694 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3693
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1194 [IO wait, 101 minutes]:
internal/poll.runtime_pollWait(0x599e26c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0005fc900?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0005fc900)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0005fc900)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000a12040)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000a12040)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000248ff0, {0x10997a00, 0xc000a12040})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000248ff0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00194e680?, 0xc00194e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1191
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 4628 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018b5800, 0xc000068cb0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4625
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 1718 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc002362600, 0xc0016f7500)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1717
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3924 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc000488f50, 0xc000488f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x0?, 0xc000488f50, 0xc000488f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc00179e680?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000488fd0?, 0xd7cc764?, 0xc001b94cf0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3918
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3925 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3924
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4658 [select]:
os/exec.(*Cmd).watchCtx(0xc0018b4d80, 0xc000069500)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4655
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3712 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d60600, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3707
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4565 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4561
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2726 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0001590e0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc001312b60, 0xc001507c68)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2633
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4625 [syscall, 1 minutes]:
syscall.syscall6(0x129f9a68?, 0x90?, 0xc000911bf8?, 0x129f0108?, 0x90?, 0x100000d647fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc000911bb8?, 0xd643ac5?, 0x90?, 0x108d6e80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc00001e850?, 0xc000911bec, 0xc001491830?, 0xc002154bf0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc001389500)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0xd68e1b9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0018b5800)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0018b5800)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00179f040, 0xc0018b5800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x109a4df0, 0xc00040c620}, 0xc00179f040, {0xc001d23aa0, 0x1c}, {0xf07b7a001516f58?, 0xc001516f60?}, {0xd7813f3?, 0xd6e022f?}, {0xc001ac3100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00179f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00179f040, 0xc001738d00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4555
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2691 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0019062d0, 0x1f)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000913d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001906300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007ea020, {0x1097dde0, 0xc00151e030}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007ea020, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2688
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 1422 [chan receive, 99 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004b6700, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1325
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3693 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc001526f50, 0xc0014f6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x60?, 0xc001526f50, 0xc001526f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0x4f53496562756b69?, 0x2f3a73707474683a?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd7cc705?, 0xc0019ef680?, 0xc000069260?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3712
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3237 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0015db750, 0xc0015db798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0xc0?, 0xc0015db750, 0xc0015db798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc001237040?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015db7d0?, 0xd7cc764?, 0xc001d461c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3227
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3458 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0015d9750, 0xc0015d9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x0?, 0xc0015d9750, 0xc0015d9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcc27e5?, 0xc001a94de0?, 0x1099b660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3455
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4326 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4322
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4578 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc001517f50, 0xc001517f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x20?, 0xc001517f50, 0xc001517f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc00179e9c0?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd7cc705?, 0xc001512780?, 0xc001808d20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4566
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3803 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000897dd0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00122ad80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000897e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019f9ab0, {0x1097dde0, 0xc0005293b0}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019f9ab0, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3692 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001d605d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013b2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d60600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b429f0, {0x1097dde0, 0xc001b94d20}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b429f0, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3712
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3923 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0019b0ed0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001226d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019b0f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b42840, {0x1097dde0, 0xc001904a50}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b42840, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3918
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3329 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001907000, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3327
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 1431 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1430
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1430 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0015d8750, 0xc0014f8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x50?, 0xc0015d8750, 0xc0015d8798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc001236820?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015d87d0?, 0xd7cc764?, 0xc0016f6850?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1422
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2712 [chan receive, 24 minutes]:
testing.(*T).Run(0xc001236340, {0xf414f6c?, 0xd7813f3?}, 0x1096edb8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001236340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001236340, 0x1096ec40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3454 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3453
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1838 [select, 97 minutes]:
net/http.(*persistConn).readLoop(0xc001374360)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1860
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 3805 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3804
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3918 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019b0f00, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3328 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3327
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4555 [chan receive, 1 minutes]:
testing.(*T).Run(0xc001237860, {0xf42229e?, 0xc0017c2700?}, 0xc001738d00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001237860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001237860, 0xc001881f00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3172
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 1839 [select, 97 minutes]:
net/http.(*persistConn).writeLoop(0xc001374360)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1860
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 4053 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4049
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3459 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3458
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1429 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0004b6650, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001227d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004b6700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009fe900, {0x1097dde0, 0xc001222750}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009fe900, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1422
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4566 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b984c0, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4561
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3493 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004b6740, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3505
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3917 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3783 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000897e00, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3798
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 2693 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2692
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3312 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001906fd0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00157ed80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001907000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001269ca0, {0x1097dde0, 0xc0007e3680}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001269ca0, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3329
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3492 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3505
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2692 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0014faf50, 0xc0014faf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x61?, 0xc0014faf50, 0xc0014faf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0x746e692d6562756b?, 0x6e6f697461726765?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00152d7d0?, 0xd7cc764?, 0x696d2f736e696b6e?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2688
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2687 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2654
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2688 [chan receive, 63 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001906300, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2654
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4655 [syscall]:
syscall.syscall6(0x129f9a68?, 0x90?, 0xc00133ec28?, 0x129f05b8?, 0x90?, 0x100000d647fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc00133ebe8?, 0xd643ac5?, 0x90?, 0x108d6e80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc00001e9a0?, 0xc00133ec1c, 0xc0013c0588?, 0xc002154ff0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc001388900)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0xd68e1b9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0018b4d80)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0018b4d80)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00179e680, 0xc0018b4d80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x109a4df0?, 0xc00001e930?}, 0xc00179e680, {0xc0019d6e28?, 0x3276caf0?}, {0x3276caf0012fbf58?, 0xc0012fbf60?}, {0xd7813f3?, 0xd6e022f?}, {0xc000822700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00179e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00179e680, 0xc001738100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4654
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 1559 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018d1080, 0xc001a8c1c0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1558
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3345 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc0015d8750, 0xc0015d8798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0xf0?, 0xc0015d8750, 0xc0015d8798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc001312680?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015d87d0?, 0xd7cc764?, 0xc001d46af0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3329
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3169 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00194e820, 0x1096edb8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2712
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4579 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4578
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3171 [chan receive]:
testing.(*T).Run(0xc00194eea0, {0xf4165d0?, 0x0?}, 0xc001738080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00194eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00194eea0, 0xc0021c6180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3169
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4656 [IO wait]:
internal/poll.runtime_pollWait(0x599e1f90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001f3cde0?, 0xc00128322f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001f3cde0, {0xc00128322f, 0x5d1, 0x5d1})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0c3c0, {0xc00128322f?, 0xc0015dbd50?, 0x22f?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0022970b0, {0x1097c5d8, 0xc00172e078})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1097c760, 0xc0022970b0}, {0x1097c5d8, 0xc00172e078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015dbe78?, {0x1097c760, 0xc0022970b0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x11f17340?, {0x1097c760?, 0xc0022970b0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x1097c760, 0xc0022970b0}, {0x1097c6c0, 0xc000a0c3c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001a8c7e0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4655
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 4032 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4031
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4031 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc001526750, 0xc001526798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x74?, 0xc001526750, 0xc001526798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0x313a6f672e6e6961?, 0x6d62696c205d3134?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015267d0?, 0xd7cc764?, 0x756f2f6563617073?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4054
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4030 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001906c90, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001578d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001906cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007f0d80, {0x1097dde0, 0xc001264390}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007f0d80, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4054
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4054 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001906cc0, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4049
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

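Goroutines 4054, 4030/4031 and their siblings below all belong to client-go's certificate-rotation machinery: Run blocks on a stop channel while a worker drains a queue and a poller periodically re-checks a condition, so they report "chan receive", "sync.Cond.Wait" and "select" for as long as the test keeps the client alive; they are idle, not leaked. A stdlib-only sketch of that shape (names invented; the real code lives in client-go's transport package):

package main

import (
	"context"
	"fmt"
	"time"
)

func run(ctx context.Context, work <-chan string) {
	// Worker: blocks until an item arrives or shutdown (the "chan receive" /
	// "sync.Cond.Wait" states in the dump above).
	go func() {
		for {
			select {
			case item := <-work:
				fmt.Println("processing", item)
			case <-ctx.Done():
				return
			}
		}
	}()
	// Poller: wakes periodically to re-check a condition (the "select, N minutes" states).
	go func() {
		tick := time.NewTicker(time.Second)
		defer tick.Stop()
		for {
			select {
			case <-tick.C:
				// re-evaluate the rotated certificate here
			case <-ctx.Done():
				return
			}
		}
	}()
	<-ctx.Done() // Run itself parks on the stop channel until shutdown.
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	work := make(chan string, 1)
	work <- "rotate-client-cert"
	run(ctx, work)
}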
goroutine 4307 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc000487750, 0xc000487798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x0?, 0xc000487750, 0xc000487798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc0012364e0?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0004877d0?, 0xd7cc764?, 0xc0020e60f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4327
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4308 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4307
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4156 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4155
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4171 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4150
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4627 [IO wait]:
internal/poll.runtime_pollWait(0x599e1c78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001f3c0c0?, 0xc002fceb1f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001f3c0c0, {0xc002fceb1f, 0x1d4e1, 0x1d4e1})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0c618, {0xc002fceb1f?, 0xc0012f9550?, 0x1ffa3?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00214a510, {0x1097c5d8, 0xc00172e348})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1097c760, 0xc00214a510}, {0x1097c5d8, 0xc00172e348}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0012f9678?, {0x1097c760, 0xc00214a510})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x11f17340?, {0x1097c760?, 0xc00214a510?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x1097c760, 0xc00214a510}, {0x1097c6c0, 0xc000a0c618}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc002133500?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4625
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 4327 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b99dc0, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4322
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4306 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b99d90, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001289d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b99dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021f02f0, {0x1097dde0, 0xc0009fd140}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021f02f0, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4327
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4268 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x109a5000, 0xc000068690}, 0xc001519750, 0xc001519798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x109a5000, 0xc000068690}, 0x50?, 0xc001519750, 0xc001519798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x109a5000?, 0xc000068690?}, 0xc001237860?, 0xd781d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015197d0?, 0xd7cc764?, 0xc001a8a1b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4279
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4577 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b98490, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001526d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b984c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019f8ad0, {0x1097dde0, 0xc0021d5a40}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019f8ad0, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4566
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4278 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x1099b660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4274
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4267 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc001d60410, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001287d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x109bfa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d60440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008f0c40, {0x1097dde0, 0xc00210a3f0}, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008f0c40, 0x3b9aca00, 0x0, 0x1, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4279
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4279 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d60440, 0xc000068690)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4274
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4654 [chan receive]:
testing.(*T).Run(0xc00179e4e0, {0xf42010f?, 0xc001a66e00?}, 0xc001738100)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00179e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00179e4e0, 0xc001738080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3171
	/usr/local/go/src/testing/testing.go:1743 +0x390

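Goroutine 4654 is the ordinary subtest pattern: testing.(*T).Run parks the parent test in a channel receive while the child runs. A hedged sketch (names illustrative, not from start_stop_delete_test.go):

package example_test

import "testing"

func TestParent(t *testing.T) {
	// t.Run executes the subtest on a new goroutine and parks the parent in
	// a channel receive until the child finishes -- the "chan receive" state
	// shown for goroutine 4654.
	t.Run("child", func(t *testing.T) {
		if got := 1 + 1; got != 2 {
			t.Fatalf("got %d, want 2", got)
		}
	})
}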

Test pass (176/214)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.12
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 10.11
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.31
18 TestDownloadOnly/v1.31.1/DeleteAll 0.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 202.09
29 TestAddons/serial/Volcano 39.52
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 20.21
35 TestAddons/parallel/InspektorGadget 10.53
36 TestAddons/parallel/MetricsServer 5.48
37 TestAddons/parallel/HelmTiller 10.31
39 TestAddons/parallel/CSI 44.35
40 TestAddons/parallel/Headlamp 19.45
41 TestAddons/parallel/CloudSpanner 5.4
42 TestAddons/parallel/LocalPath 53.56
43 TestAddons/parallel/NvidiaDevicePlugin 5.34
44 TestAddons/parallel/Yakd 10.45
45 TestAddons/StoppedEnableDisable 5.94
53 TestHyperKitDriverInstallOrUpdate 8.83
56 TestErrorSpam/setup 35.56
57 TestErrorSpam/start 1.72
58 TestErrorSpam/status 0.51
59 TestErrorSpam/pause 1.35
60 TestErrorSpam/unpause 1.46
61 TestErrorSpam/stop 153.81
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 77.51
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.42
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
73 TestFunctional/serial/CacheCmd/cache/add_local 1.32
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.04
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1.2
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.57
81 TestFunctional/serial/ExtraConfig 42.49
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.55
84 TestFunctional/serial/LogsFileCmd 2.54
85 TestFunctional/serial/InvalidService 3.95
87 TestFunctional/parallel/ConfigCmd 0.52
88 TestFunctional/parallel/DashboardCmd 10.07
89 TestFunctional/parallel/DryRun 1.21
90 TestFunctional/parallel/InternationalLanguage 0.78
91 TestFunctional/parallel/StatusCmd 0.51
95 TestFunctional/parallel/ServiceCmdConnect 7.56
96 TestFunctional/parallel/AddonsCmd 0.22
97 TestFunctional/parallel/PersistentVolumeClaim 27.21
99 TestFunctional/parallel/SSHCmd 0.3
100 TestFunctional/parallel/CpCmd 1.07
101 TestFunctional/parallel/MySQL 25.49
102 TestFunctional/parallel/FileSync 0.17
103 TestFunctional/parallel/CertSync 1
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
111 TestFunctional/parallel/License 0.45
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.39
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.26
119 TestFunctional/parallel/ImageCommands/Setup 1.83
120 TestFunctional/parallel/DockerEnv/bash 0.62
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.62
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
131 TestFunctional/parallel/ServiceCmd/DeployApp 22.12
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.14
137 TestFunctional/parallel/ServiceCmd/List 0.38
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
140 TestFunctional/parallel/ServiceCmd/Format 0.25
141 TestFunctional/parallel/ServiceCmd/URL 0.25
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
149 TestFunctional/parallel/ProfileCmd/profile_list 0.27
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
151 TestFunctional/parallel/MountCmd/any-port 5.97
152 TestFunctional/parallel/MountCmd/specific-port 1.52
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.2
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 184.44
161 TestMultiControlPlane/serial/DeployApp 8.99
162 TestMultiControlPlane/serial/PingHostFromPods 1.3
163 TestMultiControlPlane/serial/AddWorkerNode 50.31
164 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
166 TestMultiControlPlane/serial/CopyFile 9.16
167 TestMultiControlPlane/serial/StopSecondaryNode 8.68
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
169 TestMultiControlPlane/serial/RestartSecondaryNode 41.59
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.26
174 TestMultiControlPlane/serial/StopCluster 24.98
181 TestImageBuild/serial/Setup 37.76
182 TestImageBuild/serial/NormalBuild 1.81
183 TestImageBuild/serial/BuildWithBuildArg 0.86
184 TestImageBuild/serial/BuildWithDockerIgnore 0.63
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
189 TestJSONOutput/start/Command 85.09
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.51
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.47
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.33
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.59
217 TestMainNoArgs 0.08
218 TestMinikubeProfile 89.8
224 TestMultiNode/serial/FreshStart2Nodes 111.86
225 TestMultiNode/serial/DeployApp2Nodes 5.87
226 TestMultiNode/serial/PingHostFrom2Pods 0.89
227 TestMultiNode/serial/AddNode 45.78
228 TestMultiNode/serial/MultiNodeLabels 0.05
229 TestMultiNode/serial/ProfileList 0.17
230 TestMultiNode/serial/CopyFile 5.25
231 TestMultiNode/serial/StopNode 2.84
232 TestMultiNode/serial/StartAfterStop 41.56
233 TestMultiNode/serial/RestartKeepsNodes 140.18
234 TestMultiNode/serial/DeleteNode 3.3
235 TestMultiNode/serial/StopMultiNode 16.81
236 TestMultiNode/serial/RestartMultiNode 97.94
237 TestMultiNode/serial/ValidateNameConflict 44.41
244 TestSkaffold 114.21
247 TestRunningBinaryUpgrade 111.8
249 TestKubernetesUpgrade 1496.54
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.23
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.09
264 TestStoppedBinaryUpgrade/Setup 1.83
265 TestStoppedBinaryUpgrade/Upgrade 105.69
266 TestStoppedBinaryUpgrade/MinikubeLogs 2.96
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.49
278 TestNoKubernetes/serial/StartWithK8s 97.62
279 TestNoKubernetes/serial/StartWithStopK8s 57.38
TestDownloadOnly/v1.20.0/json-events (25.12s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-073000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-073000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (25.11664791s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.12s)
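The json-events subtests drive `minikube start -o=json`, which writes one JSON event per output line. A rough consumer sketch, assuming line-delimited objects with a "type" field (the field name is an assumption, not taken from minikube's schema):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start -o=json ...` output here
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Println("event:", ev["type"]) // "type" is an assumed field name
	}
}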

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-073000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-073000: exit status 85 (291.637045ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-073000 | jenkins | v1.34.0 | 17 Sep 24 09:54 PDT |          |
	|         | -p download-only-073000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:54:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:54:58.139324    2123 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:54:58.139599    2123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:54:58.139605    2123 out.go:358] Setting ErrFile to fd 2...
	I0917 09:54:58.139608    2123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:54:58.139803    2123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	W0917 09:54:58.139895    2123 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19662-1558/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19662-1558/.minikube/config/config.json: no such file or directory
	I0917 09:54:58.141676    2123 out.go:352] Setting JSON to true
	I0917 09:54:58.164525    2123 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1465,"bootTime":1726590633,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 09:54:58.164674    2123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:54:58.186993    2123 out.go:97] [download-only-073000] minikube v1.34.0 on Darwin 14.6.1
	I0917 09:54:58.187253    2123 notify.go:220] Checking for updates...
	W0917 09:54:58.187275    2123 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 09:54:58.208720    2123 out.go:169] MINIKUBE_LOCATION=19662
	I0917 09:54:58.230003    2123 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 09:54:58.251912    2123 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 09:54:58.272695    2123 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:54:58.294070    2123 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	W0917 09:54:58.336724    2123 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 09:54:58.337238    2123 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:54:58.391723    2123 out.go:97] Using the hyperkit driver based on user configuration
	I0917 09:54:58.391774    2123 start.go:297] selected driver: hyperkit
	I0917 09:54:58.391790    2123 start.go:901] validating driver "hyperkit" against <nil>
	I0917 09:54:58.391994    2123 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:54:58.392409    2123 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 09:54:58.790626    2123 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 09:54:58.795481    2123 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:54:58.795501    2123 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 09:54:58.795527    2123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:54:58.800145    2123 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0917 09:54:58.800594    2123 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 09:54:58.800624    2123 cni.go:84] Creating CNI manager for ""
	I0917 09:54:58.800668    2123 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 09:54:58.800741    2123 start.go:340] cluster config:
	{Name:download-only-073000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-073000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:54:58.800968    2123 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:54:58.822342    2123 out.go:97] Downloading VM boot image ...
	I0917 09:54:58.822427    2123 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 09:55:06.317745    2123 out.go:97] Starting "download-only-073000" primary control-plane node in "download-only-073000" cluster
	I0917 09:55:06.317789    2123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:06.384263    2123 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 09:55:06.384290    2123 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:06.384468    2123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:06.405000    2123 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 09:55:06.405018    2123 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 09:55:06.495885    2123 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 09:55:20.869089    2123 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 09:55:20.869276    2123 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 09:55:21.411197    2123 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 09:55:21.411447    2123 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/download-only-073000/config.json ...
	I0917 09:55:21.411473    2123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/download-only-073000/config.json: {Name:mkbf52618a8726fa77f032a5ef1a9dadac8c7af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:21.411824    2123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 09:55:21.412172    2123 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-073000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-073000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
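The "Last Start" log above records minikube's download-then-verify flow: each artifact URL carries its checksum (e.g. `?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3`), and the client hashes the file as it lands on disk. A hedged sketch of that flow with an invented helper (URL and destination are illustrative):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Tee the response body through the hash while writing to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Illustrative values; the real URLs and checksums appear in the log above.
	err := downloadWithMD5("https://example.com/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}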

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-073000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.1/json-events (10.11s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit : (10.109759519s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.11s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-498000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-498000: exit status 85 (312.440264ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-073000 | jenkins | v1.34.0 | 17 Sep 24 09:54 PDT |                     |
	|         | -p download-only-073000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| delete  | -p download-only-073000        | download-only-073000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT | 17 Sep 24 09:55 PDT |
	| start   | -o=json --download-only        | download-only-498000 | jenkins | v1.34.0 | 17 Sep 24 09:55 PDT |                     |
	|         | -p download-only-498000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:55:23
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:55:23.991789    2152 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:55:23.991982    2152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:23.991988    2152 out.go:358] Setting ErrFile to fd 2...
	I0917 09:55:23.991991    2152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:55:23.992160    2152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 09:55:23.993672    2152 out.go:352] Setting JSON to true
	I0917 09:55:24.016216    2152 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1491,"bootTime":1726590633,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 09:55:24.016366    2152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 09:55:24.038183    2152 out.go:97] [download-only-498000] minikube v1.34.0 on Darwin 14.6.1
	I0917 09:55:24.038391    2152 notify.go:220] Checking for updates...
	I0917 09:55:24.059685    2152 out.go:169] MINIKUBE_LOCATION=19662
	I0917 09:55:24.080819    2152 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 09:55:24.101866    2152 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 09:55:24.122879    2152 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:55:24.143842    2152 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	W0917 09:55:24.185648    2152 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 09:55:24.186168    2152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:55:24.215784    2152 out.go:97] Using the hyperkit driver based on user configuration
	I0917 09:55:24.215834    2152 start.go:297] selected driver: hyperkit
	I0917 09:55:24.215848    2152 start.go:901] validating driver "hyperkit" against <nil>
	I0917 09:55:24.216093    2152 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:24.216338    2152 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19662-1558/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 09:55:24.226195    2152 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 09:55:24.230092    2152 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 09:55:24.230110    2152 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 09:55:24.230136    2152 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 09:55:24.232843    2152 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0917 09:55:24.233000    2152 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 09:55:24.233037    2152 cni.go:84] Creating CNI manager for ""
	I0917 09:55:24.233079    2152 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 09:55:24.233090    2152 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 09:55:24.233160    2152 start.go:340] cluster config:
	{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:55:24.233249    2152 iso.go:125] acquiring lock: {Name:mk601a4d51f4198cd9beb5e3a2e5ca4d3bc1b26c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 09:55:24.255664    2152 out.go:97] Starting "download-only-498000" primary control-plane node in "download-only-498000" cluster
	I0917 09:55:24.255697    2152 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:24.314294    2152 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 09:55:24.314327    2152 cache.go:56] Caching tarball of preloaded images
	I0917 09:55:24.314743    2152 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:24.338677    2152 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 09:55:24.338747    2152 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 09:55:24.426402    2152 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 09:55:31.600793    2152 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 09:55:31.600991    2152 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 09:55:32.067187    2152 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 09:55:32.067436    2152 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/download-only-498000/config.json ...
	I0917 09:55:32.067460    2152 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/download-only-498000/config.json: {Name:mkc4d487fd41d3288c5413499c462b6a9a163f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 09:55:32.067770    2152 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 09:55:32.067969    2152 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19662-1558/.minikube/cache/darwin/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-498000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-498000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.31s)

TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-498000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-109000 --alsologtostderr --binary-mirror http://127.0.0.1:49644 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-109000
--- PASS: TestBinaryMirror (0.96s)
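TestBinaryMirror points --binary-mirror at a local HTTP endpoint serving pre-fetched Kubernetes binaries. A sketch of what such a mirror can be, assuming a plain file server whose directory layout mirrors dl.k8s.io (port and directory are illustrative, not the test's own setup):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Files laid out like ./mirror/release/v1.31.1/bin/darwin/amd64/kubectl,
	// so minikube can swap the mirror base URL in place of dl.k8s.io.
	log.Fatal(http.ListenAndServe("127.0.0.1:49644",
		http.FileServer(http.Dir("./mirror"))))
}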

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-684000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-684000: exit status 85 (187.26826ms)

-- stdout --
	* Profile "addons-684000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-684000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
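Both PreSetup subtests assert that addon commands against a non-existent profile fail with exit status 85 rather than succeeding. A sketch of how a test can check for a specific exit code via os/exec (command line taken from the log above; the assertion itself is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"addons", "enable", "dashboard", "-p", "addons-684000")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The log above shows 85 for a profile that does not exist.
		fmt.Println("exit code:", ee.ExitCode())
	}
}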

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-684000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-684000: exit status 85 (166.719416ms)

-- stdout --
	* Profile "addons-684000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-684000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestAddons/Setup (202.09s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-684000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-684000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m22.090679646s)
--- PASS: TestAddons/Setup (202.09s)

TestAddons/serial/Volcano (39.52s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 11.716903ms
addons_test.go:897: volcano-scheduler stabilized in 11.748159ms
addons_test.go:905: volcano-admission stabilized in 11.832685ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-q2rxs" [9c98cd99-ab66-432a-aef0-29ddb1d80258] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00439727s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-dtrbd" [33f09a43-993d-42f8-bd98-68041c5518bb] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003452156s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-xrkrf" [33b5985f-c97e-4b13-b41c-71d1d0117b7b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0038125s
addons_test.go:932: (dbg) Run:  kubectl --context addons-684000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-684000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-684000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5a376f50-2f53-4530-884a-4ffb24baba8b] Pending
helpers_test.go:344: "test-job-nginx-0" [5a376f50-2f53-4530-884a-4ffb24baba8b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5a376f50-2f53-4530-884a-4ffb24baba8b] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.005871637s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 addons disable volcano --alsologtostderr -v=1: (10.231742412s)
--- PASS: TestAddons/serial/Volcano (39.52s)
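
Condensed, the Volcano exercise above is this command sequence (a sketch of the logged steps; testdata/vcjob.yaml is the job manifest shipped with the integration tests, and minikube stands in for out/minikube-darwin-amd64):

	kubectl --context addons-684000 delete -n volcano-system job volcano-admission-init
	kubectl --context addons-684000 create -f testdata/vcjob.yaml
	kubectl --context addons-684000 get vcjob -n my-volcano
	# once pods labelled volcano.sh/job-name=test-job are Running:
	minikube -p addons-684000 addons disable volcano --alsologtostderr -v=1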

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-684000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-684000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (20.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-684000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-684000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-684000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a5f2d355-5a41-4566-81c0-5aa85695b7d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a5f2d355-5a41-4566-81c0-5aa85695b7d8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004957603s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-684000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 addons disable ingress --alsologtostderr -v=1: (7.463574108s)
--- PASS: TestAddons/parallel/Ingress (20.21s)
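
To replay the ingress verification by hand, the logged steps reduce to (sketch; 192.169.0.2 is the VM IP this run reported, and minikube stands in for the test binary):

	minikube -p addons-684000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	minikube -p addons-684000 ip
	nslookup hello-john.test 192.169.0.2
	minikube -p addons-684000 addons disable ingress-dns --alsologtostderr -v=1
	minikube -p addons-684000 addons disable ingress --alsologtostderr -v=1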

TestAddons/parallel/InspektorGadget (10.53s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9grhj" [bd7eb77f-bedf-4e60-89d3-91ee0d4203c9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.024717054s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-684000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-684000: (5.500105137s)
--- PASS: TestAddons/parallel/InspektorGadget (10.53s)

TestAddons/parallel/MetricsServer (5.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.633163ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zg4lj" [02fe2d85-c7d2-493d-a247-c14e47795708] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004796614s
addons_test.go:417: (dbg) Run:  kubectl --context addons-684000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.48s)

TestAddons/parallel/HelmTiller (10.31s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.838278ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-kmbwj" [98b0df5c-c437-4436-9bd8-2b7954038de5] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003748033s
addons_test.go:475: (dbg) Run:  kubectl --context addons-684000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-684000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.879115662s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.31s)

TestAddons/parallel/CSI (44.35s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.22853ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-684000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-684000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a10d922b-6120-4f70-87cf-c731c04d851c] Pending
helpers_test.go:344: "task-pv-pod" [a10d922b-6120-4f70-87cf-c731c04d851c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a10d922b-6120-4f70-87cf-c731c04d851c] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.005671994s
addons_test.go:590: (dbg) Run:  kubectl --context addons-684000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-684000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-684000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-684000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-684000 delete pod task-pv-pod: (1.286724066s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-684000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-684000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-684000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [00b7079a-f042-49c1-b949-1b3de891fcde] Pending
helpers_test.go:344: "task-pv-pod-restore" [00b7079a-f042-49c1-b949-1b3de891fcde] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [00b7079a-f042-49c1-b949-1b3de891fcde] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004052796s
addons_test.go:632: (dbg) Run:  kubectl --context addons-684000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-684000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-684000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.415796918s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.35s)
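
In outline, the CSI test performs a provision/snapshot/restore round trip with these commands (a sketch of the logged steps; the --context addons-684000 flag is omitted for brevity, and the manifests live under testdata/csi-hostpath-driver/ in the minikube repo):

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml           # PVC "hpvc"
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod "task-pv-pod"
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml      # snapshot "new-snapshot-demo"
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # PVC "hpvc-restore" from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml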

TestAddons/parallel/Headlamp (19.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-684000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8r8j2" [de066acb-7fc1-43d4-9e9a-c42588481284] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8r8j2" [de066acb-7fc1-43d4-9e9a-c42588481284] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004133879s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 addons disable headlamp --alsologtostderr -v=1: (5.49423705s)
--- PASS: TestAddons/parallel/Headlamp (19.45s)

TestAddons/parallel/CloudSpanner (5.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-p5j7h" [464bb95f-7339-4fcb-a10b-9aad3dd550bf] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005772398s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-684000
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

TestAddons/parallel/LocalPath (53.56s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-684000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-684000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ef890bd4-bf38-43db-9c37-21e8fb1e16fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ef890bd4-bf38-43db-9c37-21e8fb1e16fa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ef890bd4-bf38-43db-9c37-21e8fb1e16fa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004125516s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-684000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 ssh "cat /opt/local-path-provisioner/pvc-20e1842b-8e7a-4dee-90a3-9c75f161f7b1_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-684000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-684000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.76332357s)
--- PASS: TestAddons/parallel/LocalPath (53.56s)

TestAddons/parallel/NvidiaDevicePlugin (5.34s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5kvkx" [ce22ef59-d12f-4358-a2a7-36598797d86a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004472325s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-684000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

TestAddons/parallel/Yakd (10.45s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-q4hz7" [ba72f3a8-2644-4474-a69e-6bbe0a97abe9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00432855s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-684000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-684000 addons disable yakd --alsologtostderr -v=1: (5.44912041s)
--- PASS: TestAddons/parallel/Yakd (10.45s)

TestAddons/StoppedEnableDisable (5.94s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-684000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-684000: (5.391968372s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-684000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-684000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-684000
--- PASS: TestAddons/StoppedEnableDisable (5.94s)
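
The stopped-cluster addon checks are a short sequence (sketch of the commands above; minikube stands in for the test binary):

	minikube stop -p addons-684000
	minikube addons enable dashboard -p addons-684000
	minikube addons disable dashboard -p addons-684000
	minikube addons disable gvisor -p addons-684000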

TestHyperKitDriverInstallOrUpdate (8.83s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.83s)

TestErrorSpam/setup (35.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-890000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-890000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 --driver=hyperkit : (35.556378054s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (35.56s)
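
All TestErrorSpam subtests below reuse one profile and a throwaway log directory; the setup is equivalent to this sketch (command taken verbatim from the run, with minikube standing in for the test binary):

	minikube start -p nospam-890000 -n=1 --memory=2250 --wait=false \
	  --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 --driver=hyperkit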

TestErrorSpam/start (1.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 start --dry-run
--- PASS: TestErrorSpam/start (1.72s)

TestErrorSpam/status (0.51s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 status
--- PASS: TestErrorSpam/status (0.51s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.46s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 unpause
--- PASS: TestErrorSpam/unpause (1.46s)

TestErrorSpam/stop (153.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 stop: (3.398168784s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 stop: (1m15.206362959s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-890000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-890000 stop: (1m15.20496165s)
--- PASS: TestErrorSpam/stop (153.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19662-1558/.minikube/files/etc/test/nested/copy/2121/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.51s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-575000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0917 10:13:58.506468    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:58.553067    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:58.564335    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:58.585578    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:58.626955    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:58.709723    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:58.871172    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:59.192627    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:13:59.834098    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:01.117026    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:03.680388    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:08.802781    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:14:19.045889    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-575000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m17.512813269s)
--- PASS: TestFunctional/serial/StartWithProxy (77.51s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.42s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-575000 --alsologtostderr -v=8
E0917 10:14:39.528774    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-575000 --alsologtostderr -v=8: (41.414519124s)
functional_test.go:663: soft start took 41.415096773s for "functional-575000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.42s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-575000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-575000 cache add registry.k8s.io/pause:3.1: (1.078546752s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-575000 cache add registry.k8s.io/pause:3.3: (1.068577343s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local14226325/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cache add minikube-local-cache-test:functional-575000
E0917 10:15:20.491806    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cache delete minikube-local-cache-test:functional-575000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-575000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (147.73676ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)
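
The cache-reload flow being verified is, as plain commands (sketch of the logged steps; minikube stands in for the test binary):

	minikube -p functional-575000 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-575000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 1: image gone
	minikube -p functional-575000 cache reload
	minikube -p functional-575000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again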

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 kubectl -- --context functional-575000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-575000 kubectl -- --context functional-575000 get pods: (1.199449595s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.20s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-575000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-575000 get pods: (1.574097779s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

TestFunctional/serial/ExtraConfig (42.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-575000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-575000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.487881603s)
functional_test.go:761: restart took 42.488063506s for "functional-575000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.49s)
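
Component flags reach the cluster through --extra-config in a component.key=value shape; the restart above is equivalent to this sketch (minikube standing in for the test binary):

	minikube start -p functional-575000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all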

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-575000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.55s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-575000 logs: (2.550245741s)
--- PASS: TestFunctional/serial/LogsCmd (2.55s)

TestFunctional/serial/LogsFileCmd (2.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3839664379/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-575000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3839664379/001/logs.txt: (2.537643199s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.54s)

TestFunctional/serial/InvalidService (3.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-575000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-575000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-575000: exit status 115 (273.694457ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30719 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-575000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 config get cpus: exit status 14 (75.261577ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 config get cpus: exit status 14 (56.880259ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
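
The config round trip can be reproduced directly (sketch; exit status 14 on get of an unset key is the expected behavior the test asserts, and minikube stands in for the test binary):

	minikube -p functional-575000 config unset cpus
	minikube -p functional-575000 config get cpus   # exit 14: key not in config
	minikube -p functional-575000 config set cpus 2
	minikube -p functional-575000 config get cpus   # prints 2
	minikube -p functional-575000 config unset cpus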

TestFunctional/parallel/DashboardCmd (10.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-575000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-575000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3762: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.07s)

TestFunctional/parallel/DryRun (1.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-575000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-575000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (637.54633ms)
-- stdout --
	* [functional-575000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0917 10:17:15.307933    3711 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:17:15.308727    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:17:15.308735    3711 out.go:358] Setting ErrFile to fd 2...
	I0917 10:17:15.308741    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:17:15.309290    3711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:17:15.310885    3711 out.go:352] Setting JSON to false
	I0917 10:17:15.333666    3711 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2802,"bootTime":1726590633,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:17:15.333822    3711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:17:15.356438    3711 out.go:177] * [functional-575000] minikube v1.34.0 on Darwin 14.6.1
	I0917 10:17:15.398864    3711 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:17:15.398880    3711 notify.go:220] Checking for updates...
	I0917 10:17:15.440770    3711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:17:15.461826    3711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:17:15.482771    3711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:17:15.503839    3711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:17:15.545740    3711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:17:15.567886    3711 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:17:15.568634    3711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:17:15.568719    3711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:17:15.578494    3711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50933
	I0917 10:17:15.578906    3711 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:17:15.579327    3711 main.go:141] libmachine: Using API Version  1
	I0917 10:17:15.579358    3711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:17:15.579603    3711 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:17:15.579732    3711 main.go:141] libmachine: (functional-575000) Calling .DriverName
	I0917 10:17:15.579931    3711 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:17:15.580206    3711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:17:15.580232    3711 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:17:15.588782    3711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50935
	I0917 10:17:15.589111    3711 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:17:15.589468    3711 main.go:141] libmachine: Using API Version  1
	I0917 10:17:15.589488    3711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:17:15.589715    3711 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:17:15.589830    3711 main.go:141] libmachine: (functional-575000) Calling .DriverName
	I0917 10:17:15.658089    3711 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 10:17:15.721084    3711 start.go:297] selected driver: hyperkit
	I0917 10:17:15.721116    3711 start.go:901] validating driver "hyperkit" against &{Name:functional-575000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-575000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:17:15.721327    3711 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:17:15.763053    3711 out.go:201] 
	W0917 10:17:15.804992    3711 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 10:17:15.847113    3711 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-575000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.21s)
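Note: the non-zero exit in the stderr above is the expected outcome; the dry run deliberately requests 250MB, which is below minikube's usable minimum of 1800MB. A sketch of an invocation that would clear the same validation (the memory value is an assumption, not taken from this run):

	out/minikube-darwin-amd64 start -p functional-575000 --dry-run --memory 2048mb --alsologtostderr --driver=hyperkit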

TestFunctional/parallel/InternationalLanguage (0.78s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-575000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-575000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (776.070785ms)

-- stdout --
	* [functional-575000] minikube v1.34.0 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 10:17:15.567348    3716 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:17:15.567864    3716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:17:15.567875    3716 out.go:358] Setting ErrFile to fd 2...
	I0917 10:17:15.567882    3716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:17:15.568239    3716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:17:15.570694    3716 out.go:352] Setting JSON to false
	I0917 10:17:15.594305    3716 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2802,"bootTime":1726590633,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0917 10:17:15.594431    3716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 10:17:15.616024    3716 out.go:177] * [functional-575000] minikube v1.34.0 sur Darwin 14.6.1
	I0917 10:17:15.658255    3716 notify.go:220] Checking for updates...
	I0917 10:17:15.699955    3716 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 10:17:15.763042    3716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	I0917 10:17:15.804949    3716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 10:17:15.868022    3716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 10:17:15.930916    3716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	I0917 10:17:15.973136    3716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 10:17:15.995392    3716 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:17:15.995768    3716 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:17:15.995817    3716 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:17:16.004685    3716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50938
	I0917 10:17:16.005034    3716 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:17:16.005416    3716 main.go:141] libmachine: Using API Version  1
	I0917 10:17:16.005427    3716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:17:16.005701    3716 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:17:16.005827    3716 main.go:141] libmachine: (functional-575000) Calling .DriverName
	I0917 10:17:16.006034    3716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 10:17:16.006292    3716 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:17:16.006342    3716 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:17:16.015105    3716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50940
	I0917 10:17:16.015442    3716 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:17:16.015765    3716 main.go:141] libmachine: Using API Version  1
	I0917 10:17:16.015785    3716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:17:16.016015    3716 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:17:16.016128    3716 main.go:141] libmachine: (functional-575000) Calling .DriverName
	I0917 10:17:16.078114    3716 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0917 10:17:16.119971    3716 start.go:297] selected driver: hyperkit
	I0917 10:17:16.119985    3716 start.go:901] validating driver "hyperkit" against &{Name:functional-575000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-575000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 10:17:16.120090    3716 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 10:17:16.178914    3716 out.go:201] 
	W0917 10:17:16.221236    3716 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 10:17:16.263218    3716 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.78s)
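The French output above is the point of this test: minikube localizes its messages from the process locale, and the harness presumably runs the command under a French locale. A rough way to reproduce by hand (the LC_ALL value is an assumption, not taken from this log):

	LC_ALL=fr out/minikube-darwin-amd64 start -p functional-575000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit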

TestFunctional/parallel/StatusCmd (0.51s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.51s)
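The -f flag takes a Go template over minikube's status struct; the field names (.Host, .Kubelet, .APIServer, .Kubeconfig) and the "kublet" spelling both come straight from the command line in the log. For a healthy cluster the templated form should print something like the comment below (the output shape is an assumption; it is not captured in this log):

	out/minikube-darwin-amd64 -p functional-575000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured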

TestFunctional/parallel/ServiceCmdConnect (7.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-575000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-575000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z7p44" [4d762f68-25cb-4ba1-bfbb-ea380b86c52c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z7p44" [4d762f68-25cb-4ba1-bfbb-ea380b86c52c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005721123s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:32516
functional_test.go:1675: http://192.169.0.4:32516: success! body:

Hostname: hello-node-connect-67bdd5bbb4-z7p44

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:32516
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.56s)
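The endpoint printed by `minikube service --url` is the node IP plus the assigned NodePort (192.169.0.4:32516 here), so the echoserver body above can be fetched directly from the host:

	curl -s http://192.169.0.4:32516/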

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7f447c45-afab-49f7-95e2-61363bc0e553] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003406238s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-575000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-575000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-575000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-575000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [367d77d0-daea-4980-92e5-67ea0a52386e] Pending
helpers_test.go:344: "sp-pod" [367d77d0-daea-4980-92e5-67ea0a52386e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [367d77d0-daea-4980-92e5-67ea0a52386e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003853006s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-575000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-575000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-575000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8fa6a306-aabb-41ce-8671-b2a8d0a42506] Pending
helpers_test.go:344: "sp-pod" [8fa6a306-aabb-41ce-8671-b2a8d0a42506] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8fa6a306-aabb-41ce-8671-b2a8d0a42506] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003338258s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-575000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.21s)
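The claim name myclaim and the sp-pod consumer come from the repo's testdata; the delete-and-recreate sequence above checks that /tmp/mount/foo survives because it lives on the PVC rather than in the pod. A minimal sketch of a claim like testdata/storage-provisioner/pvc.yaml (the name matches the log; the size and access mode are assumptions):

	kubectl --context functional-575000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	EOF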

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (1.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh -n functional-575000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cp functional-575000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3251745179/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh -n functional-575000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh -n functional-575000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.07s)
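minikube cp is bidirectional: a bare path names the host side, while a <profile>:<path> source or destination names the VM side, which is exactly what the three runs above exercise. For instance, pulling the file back out of the VM:

	out/minikube-darwin-amd64 -p functional-575000 cp functional-575000:/home/docker/cp-test.txt ./cp-test.txt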

TestFunctional/parallel/MySQL (25.49s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-575000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-b5wd6" [a04eece3-1da1-42fd-a917-d94f3f79a6aa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-b5wd6" [a04eece3-1da1-42fd-a917-d94f3f79a6aa] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.002897135s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-575000 exec mysql-6cdb49bbb-b5wd6 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-575000 exec mysql-6cdb49bbb-b5wd6 -- mysql -ppassword -e "show databases;": exit status 1 (153.33385ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0917 10:16:42.414776    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1807: (dbg) Run:  kubectl --context functional-575000 exec mysql-6cdb49bbb-b5wd6 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-575000 exec mysql-6cdb49bbb-b5wd6 -- mysql -ppassword -e "show databases;": exit status 1 (108.347198ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-575000 exec mysql-6cdb49bbb-b5wd6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.49s)
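The two ERROR 2002 failures above are ordinary startup noise: the pod reports Running before mysqld has created its socket, so the test simply retries until the query succeeds. A hypothetical wait loop that does the same thing by hand, using the pod name from this run:

	until kubectl --context functional-575000 exec mysql-6cdb49bbb-b5wd6 -- \
	    mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
	  sleep 2
	done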

TestFunctional/parallel/FileSync (0.17s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2121/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /etc/test/nested/copy/2121/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)
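FileSync exercises minikube's file-sync tree: files placed under $MINIKUBE_HOME/files/<path> on the host are copied to /<path> inside the VM at start, which is how /etc/test/nested/copy/2121/hosts got there. A sketch of seeding such a file (paths illustrative, not from this run):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy"
	echo 'Test file for checking file sync process' >"$MINIKUBE_HOME/files/etc/test/nested/copy/hosts"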

TestFunctional/parallel/CertSync (1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2121.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /etc/ssl/certs/2121.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2121.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /usr/share/ca-certificates/2121.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/21212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /etc/ssl/certs/21212.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/21212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /usr/share/ca-certificates/21212.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.00s)
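The .pem files are keyed by the test process ID (2121), and /etc/ssl/certs/51391683.0 is the OpenSSL subject-hash symlink for the same certificate, so the test confirms minikube installed the cert under both names. The hash half of that filename can be recomputed on the host (path below is hypothetical):

	openssl x509 -noout -subject_hash -in /path/to/2121.pem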

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-575000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
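The go-template above iterates the label map of the first node and prints the keys. An equivalent (hypothetical) jsonpath spelling, should the template form ever need cross-checking:

	kubectl --context functional-575000 get nodes -o jsonpath='{.items[0].metadata.labels}'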

TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh "sudo systemctl is-active crio": exit status 1 (161.551962ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)
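The `ssh: Process exited with status 3` above is not an error in the test's eyes: systemctl is-active exits non-zero for any state other than active (3 for inactive) while still printing the state, so "inactive" on stdout plus exit 3 is precisely what proves crio is disabled on this docker-runtime cluster. Spelled out:

	out/minikube-darwin-amd64 -p functional-575000 ssh 'sudo systemctl is-active crio; echo "rc=$?"'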

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-575000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-575000 | 7168e0b30fc70 | 30B    |
| docker.io/kicbase/echo-server               | functional-575000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| localhost/my-image                          | functional-575000 | 23239d4fa8901 | 1.24MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-575000 image ls --format table --alsologtostderr:
I0917 10:17:20.471722    3784 out.go:345] Setting OutFile to fd 1 ...
I0917 10:17:20.472018    3784 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:20.472024    3784 out.go:358] Setting ErrFile to fd 2...
I0917 10:17:20.472027    3784 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:20.472222    3784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
I0917 10:17:20.472865    3784 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:20.472959    3784 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:20.473323    3784 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:20.473365    3784 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:20.482108    3784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
I0917 10:17:20.482536    3784 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:20.482932    3784 main.go:141] libmachine: Using API Version  1
I0917 10:17:20.482945    3784 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:20.483184    3784 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:20.483308    3784 main.go:141] libmachine: (functional-575000) Calling .GetState
I0917 10:17:20.483396    3784 main.go:141] libmachine: (functional-575000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 10:17:20.483459    3784 main.go:141] libmachine: (functional-575000) DBG | hyperkit pid from json: 2992
I0917 10:17:20.484872    3784 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:20.484895    3784 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:20.493356    3784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
I0917 10:17:20.493722    3784 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:20.494035    3784 main.go:141] libmachine: Using API Version  1
I0917 10:17:20.494045    3784 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:20.494270    3784 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:20.494382    3784 main.go:141] libmachine: (functional-575000) Calling .DriverName
I0917 10:17:20.494554    3784 ssh_runner.go:195] Run: systemctl --version
I0917 10:17:20.494572    3784 main.go:141] libmachine: (functional-575000) Calling .GetSSHHostname
I0917 10:17:20.494654    3784 main.go:141] libmachine: (functional-575000) Calling .GetSSHPort
I0917 10:17:20.494761    3784 main.go:141] libmachine: (functional-575000) Calling .GetSSHKeyPath
I0917 10:17:20.494849    3784 main.go:141] libmachine: (functional-575000) Calling .GetSSHUsername
I0917 10:17:20.494945    3784 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/functional-575000/id_rsa Username:docker}
I0917 10:17:20.530883    3784 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 10:17:20.550999    3784 main.go:141] libmachine: Making call to close driver server
I0917 10:17:20.551011    3784 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:20.551175    3784 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:20.551186    3784 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 10:17:20.551193    3784 main.go:141] libmachine: Making call to close driver server
I0917 10:17:20.551199    3784 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:20.551200    3784 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
I0917 10:17:20.551408    3784 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
I0917 10:17:20.551458    3784 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:20.551484    3784 main.go:141] libmachine: Making call to close connection to plugin binary
2024/09/17 10:17:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-575000 image ls --format json --alsologtostderr:
[{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-575000"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"23239d4fa8901ce7496b935894cf77d26dec1ff33e70af03cf0529d3bd70eb82","repoDigests":[],"repoTags":["localhost/my-image:functional-575000"],"size":"1240000"},{"id":"7168e0b30fc7062371003b7c7f68ffb27e1dbd91aaac6f14117636e7137121d5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-575000"],"size":"30"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-575000 image ls --format json --alsologtostderr:
I0917 10:17:20.305076    3780 out.go:345] Setting OutFile to fd 1 ...
I0917 10:17:20.305355    3780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:20.305361    3780 out.go:358] Setting ErrFile to fd 2...
I0917 10:17:20.305365    3780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:20.305570    3780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
I0917 10:17:20.306262    3780 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:20.306372    3780 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:20.306748    3780 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:20.306800    3780 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:20.315406    3780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51023
I0917 10:17:20.315831    3780 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:20.316233    3780 main.go:141] libmachine: Using API Version  1
I0917 10:17:20.316243    3780 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:20.316489    3780 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:20.316611    3780 main.go:141] libmachine: (functional-575000) Calling .GetState
I0917 10:17:20.316723    3780 main.go:141] libmachine: (functional-575000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 10:17:20.316810    3780 main.go:141] libmachine: (functional-575000) DBG | hyperkit pid from json: 2992
I0917 10:17:20.318351    3780 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:20.318374    3780 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:20.327133    3780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51025
I0917 10:17:20.327491    3780 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:20.327820    3780 main.go:141] libmachine: Using API Version  1
I0917 10:17:20.327840    3780 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:20.328119    3780 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:20.328237    3780 main.go:141] libmachine: (functional-575000) Calling .DriverName
I0917 10:17:20.328393    3780 ssh_runner.go:195] Run: systemctl --version
I0917 10:17:20.328417    3780 main.go:141] libmachine: (functional-575000) Calling .GetSSHHostname
I0917 10:17:20.328499    3780 main.go:141] libmachine: (functional-575000) Calling .GetSSHPort
I0917 10:17:20.328584    3780 main.go:141] libmachine: (functional-575000) Calling .GetSSHKeyPath
I0917 10:17:20.328677    3780 main.go:141] libmachine: (functional-575000) Calling .GetSSHUsername
I0917 10:17:20.328763    3780 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/functional-575000/id_rsa Username:docker}
I0917 10:17:20.367387    3780 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 10:17:20.390406    3780 main.go:141] libmachine: Making call to close driver server
I0917 10:17:20.390420    3780 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:20.390566    3780 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:20.390575    3780 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 10:17:20.390584    3780 main.go:141] libmachine: Making call to close driver server
I0917 10:17:20.390591    3780 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:20.390749    3780 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:20.390758    3780 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 10:17:20.390776    3780 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)
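The JSON form is the easiest to post-process; for example, a quick tag-and-size summary (assuming jq is installed on the host, which this run does not establish):

	out/minikube-darwin-amd64 -p functional-575000 image ls --format json | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'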

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-575000 image ls --format yaml --alsologtostderr:
- id: 7168e0b30fc7062371003b7c7f68ffb27e1dbd91aaac6f14117636e7137121d5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-575000
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-575000
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-575000 image ls --format yaml --alsologtostderr:
I0917 10:17:17.892542    3763 out.go:345] Setting OutFile to fd 1 ...
I0917 10:17:17.892747    3763 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:17.892752    3763 out.go:358] Setting ErrFile to fd 2...
I0917 10:17:17.892756    3763 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:17.892941    3763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
I0917 10:17:17.893620    3763 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:17.893714    3763 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:17.894054    3763 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:17.894104    3763 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:17.902704    3763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50991
I0917 10:17:17.903150    3763 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:17.903599    3763 main.go:141] libmachine: Using API Version  1
I0917 10:17:17.903630    3763 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:17.903899    3763 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:17.904038    3763 main.go:141] libmachine: (functional-575000) Calling .GetState
I0917 10:17:17.904147    3763 main.go:141] libmachine: (functional-575000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 10:17:17.904221    3763 main.go:141] libmachine: (functional-575000) DBG | hyperkit pid from json: 2992
I0917 10:17:17.905713    3763 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:17.905755    3763 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:17.914299    3763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50994
I0917 10:17:17.914662    3763 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:17.914980    3763 main.go:141] libmachine: Using API Version  1
I0917 10:17:17.914990    3763 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:17.915226    3763 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:17.915358    3763 main.go:141] libmachine: (functional-575000) Calling .DriverName
I0917 10:17:17.915528    3763 ssh_runner.go:195] Run: systemctl --version
I0917 10:17:17.915547    3763 main.go:141] libmachine: (functional-575000) Calling .GetSSHHostname
I0917 10:17:17.915618    3763 main.go:141] libmachine: (functional-575000) Calling .GetSSHPort
I0917 10:17:17.915701    3763 main.go:141] libmachine: (functional-575000) Calling .GetSSHKeyPath
I0917 10:17:17.915775    3763 main.go:141] libmachine: (functional-575000) Calling .GetSSHUsername
I0917 10:17:17.915866    3763 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/functional-575000/id_rsa Username:docker}
I0917 10:17:17.952442    3763 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 10:17:17.968124    3763 main.go:141] libmachine: Making call to close driver server
I0917 10:17:17.968133    3763 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:17.968284    3763 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:17.968297    3763 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 10:17:17.968300    3763 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
I0917 10:17:17.968306    3763 main.go:141] libmachine: Making call to close driver server
I0917 10:17:17.968312    3763 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:17.968431    3763 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
I0917 10:17:17.968488    3763 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:17.968519    3763 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh pgrep buildkitd: exit status 1 (140.212439ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image build -t localhost/my-image:functional-575000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-575000 image build -t localhost/my-image:functional-575000 testdata/build --alsologtostderr: (1.94778643s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-575000 image build -t localhost/my-image:functional-575000 testdata/build --alsologtostderr:
I0917 10:17:18.190609    3772 out.go:345] Setting OutFile to fd 1 ...
I0917 10:17:18.190982    3772 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:18.190988    3772 out.go:358] Setting ErrFile to fd 2...
I0917 10:17:18.190992    3772 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 10:17:18.191193    3772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
I0917 10:17:18.191864    3772 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:18.192550    3772 config.go:182] Loaded profile config "functional-575000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 10:17:18.192919    3772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:18.192960    3772 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:18.201609    3772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51008
I0917 10:17:18.202060    3772 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:18.202476    3772 main.go:141] libmachine: Using API Version  1
I0917 10:17:18.202491    3772 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:18.202708    3772 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:18.202823    3772 main.go:141] libmachine: (functional-575000) Calling .GetState
I0917 10:17:18.202915    3772 main.go:141] libmachine: (functional-575000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 10:17:18.202986    3772 main.go:141] libmachine: (functional-575000) DBG | hyperkit pid from json: 2992
I0917 10:17:18.204414    3772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 10:17:18.204437    3772 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 10:17:18.213224    3772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51010
I0917 10:17:18.213594    3772 main.go:141] libmachine: () Calling .GetVersion
I0917 10:17:18.213926    3772 main.go:141] libmachine: Using API Version  1
I0917 10:17:18.213940    3772 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 10:17:18.214195    3772 main.go:141] libmachine: () Calling .GetMachineName
I0917 10:17:18.214321    3772 main.go:141] libmachine: (functional-575000) Calling .DriverName
I0917 10:17:18.214482    3772 ssh_runner.go:195] Run: systemctl --version
I0917 10:17:18.214499    3772 main.go:141] libmachine: (functional-575000) Calling .GetSSHHostname
I0917 10:17:18.214571    3772 main.go:141] libmachine: (functional-575000) Calling .GetSSHPort
I0917 10:17:18.214651    3772 main.go:141] libmachine: (functional-575000) Calling .GetSSHKeyPath
I0917 10:17:18.214729    3772 main.go:141] libmachine: (functional-575000) Calling .GetSSHUsername
I0917 10:17:18.214826    3772 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/functional-575000/id_rsa Username:docker}
I0917 10:17:18.254366    3772 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2010285401.tar
I0917 10:17:18.254447    3772 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 10:17:18.262580    3772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2010285401.tar
I0917 10:17:18.265990    3772 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2010285401.tar: stat -c "%s %y" /var/lib/minikube/build/build.2010285401.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2010285401.tar': No such file or directory
I0917 10:17:18.266016    3772 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2010285401.tar --> /var/lib/minikube/build/build.2010285401.tar (3072 bytes)
I0917 10:17:18.302713    3772 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2010285401
I0917 10:17:18.321669    3772 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2010285401 -xf /var/lib/minikube/build/build.2010285401.tar
I0917 10:17:18.333112    3772 docker.go:360] Building image: /var/lib/minikube/build/build.2010285401
I0917 10:17:18.333223    3772 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-575000 /var/lib/minikube/build/build.2010285401
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:23239d4fa8901ce7496b935894cf77d26dec1ff33e70af03cf0529d3bd70eb82 done
#8 naming to localhost/my-image:functional-575000 done
#8 DONE 0.0s
I0917 10:17:20.017628    3772 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-575000 /var/lib/minikube/build/build.2010285401: (1.684379129s)
I0917 10:17:20.017698    3772 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2010285401
I0917 10:17:20.028026    3772 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2010285401.tar
I0917 10:17:20.036803    3772 build_images.go:217] Built localhost/my-image:functional-575000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2010285401.tar
I0917 10:17:20.036828    3772 build_images.go:133] succeeded building to: functional-575000
I0917 10:17:20.036833    3772 build_images.go:134] failed building to: 
I0917 10:17:20.036851    3772 main.go:141] libmachine: Making call to close driver server
I0917 10:17:20.036858    3772 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:20.037021    3772 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
I0917 10:17:20.037047    3772 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:20.037056    3772 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 10:17:20.037065    3772 main.go:141] libmachine: Making call to close driver server
I0917 10:17:20.037070    3772 main.go:141] libmachine: (functional-575000) Calling .Close
I0917 10:17:20.037190    3772 main.go:141] libmachine: (functional-575000) DBG | Closing plugin on server side
I0917 10:17:20.037214    3772 main.go:141] libmachine: Successfully made call to close driver server
I0917 10:17:20.037222    3772 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)
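
Note: the Dockerfile under testdata/build is not reproduced in this log, but the BuildKit steps above (#1, #5, #6, #7) imply a three-step build roughly like the following sketch (contents reconstructed from the log, not taken from the repo):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

The build can be replayed by hand with the same commands the test runs:

	out/minikube-darwin-amd64 -p functional-575000 image build -t localhost/my-image:functional-575000 testdata/build --alsologtostderr
	out/minikube-darwin-amd64 -p functional-575000 image ls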

TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.808914731s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-575000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/DockerEnv/bash (0.62s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-575000 docker-env) && out/minikube-darwin-amd64 status -p functional-575000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-575000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image load --daemon kicbase/echo-server:functional-575000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image load --daemon kicbase/echo-server:functional-575000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-575000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image load --daemon kicbase/echo-server:functional-575000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image save kicbase/echo-server:functional-575000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image rm kicbase/echo-server:functional-575000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-575000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 image save --daemon kicbase/echo-server:functional-575000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-575000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/ServiceCmd/DeployApp (22.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-575000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-575000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-fjchq" [886e0e4b-a1b1-4e12-acfb-c57752e6a473] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-fjchq" [886e0e4b-a1b1-4e12-acfb-c57752e6a473] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.005732885s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.12s)
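
Note: the remaining ServiceCmd subtests below all exercise this hello-node deployment. Condensed, the flow the tests perform is (commands as run above and below; a minimal sketch, not a supported recipe):

	kubectl --context functional-575000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-575000 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-darwin-amd64 -p functional-575000 service hello-node --url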

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3467: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-575000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8578cb3b-f4b2-4f23-ba88-813ac1cb21dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8578cb3b-f4b2-4f23-ba88-813ac1cb21dd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002834919s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 service list -o json
functional_test.go:1494: Took "375.353838ms" to run "out/minikube-darwin-amd64 -p functional-575000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:32129
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:32129
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-575000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.230.59 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
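
Note: taken together, the TunnelCmd subtests above amount to the following manual check (commands taken from the tests; running the tunnel in the background with & is an assumption, as the tests manage the process themselves):

	out/minikube-darwin-amd64 -p functional-575000 tunnel --alsologtostderr &
	kubectl --context functional-575000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.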

TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "188.533347ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "79.596764ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "180.509048ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "79.357947ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (5.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2435102411/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726593425568039000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2435102411/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726593425568039000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2435102411/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726593425568039000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2435102411/001/test-1726593425568039000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.022729ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 17:17 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 17:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 17:17 test-1726593425568039000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh cat /mount-9p/test-1726593425568039000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-575000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d2fbbe50-1264-4a7e-a2ad-050c0d167a2e] Pending
helpers_test.go:344: "busybox-mount" [d2fbbe50-1264-4a7e-a2ad-050c0d167a2e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d2fbbe50-1264-4a7e-a2ad-050c0d167a2e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d2fbbe50-1264-4a7e-a2ad-050c0d167a2e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00524482s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-575000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2435102411/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.97s)
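
Note: the 9p mount check above can be reproduced manually; a minimal sketch, with <local-dir> standing in for any host directory (the test uses a temp dir):

	out/minikube-darwin-amd64 mount -p functional-575000 <local-dir>:/mount-9p --alsologtostderr -v=1 &
	out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-darwin-amd64 -p functional-575000 ssh -- ls -la /mount-9p
	out/minikube-darwin-amd64 -p functional-575000 ssh "sudo umount -f /mount-9p"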

TestFunctional/parallel/MountCmd/specific-port (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1922899116/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.573171ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1922899116/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh "sudo umount -f /mount-9p": exit status 1 (130.909801ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-575000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1922899116/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1298326498/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1298326498/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1298326498/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount1: exit status 1 (156.359202ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount1: exit status 1 (217.227196ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-575000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-575000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1298326498/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1298326498/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-575000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1298326498/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.20s)
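
Note: stray mount daemons are cleaned up with the --kill flag, as the test does above; the same command works manually:

	out/minikube-darwin-amd64 mount -p functional-575000 --kill=true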

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-575000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-575000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-575000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (184.44s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-744000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0917 10:18:58.509267    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:19:26.257449    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-744000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m4.067253551s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (184.44s)

TestMultiControlPlane/serial/DeployApp (8.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-744000 -- rollout status deployment/busybox: (6.669296336s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-cn52t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcdwg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcq64 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-cn52t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcdwg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcq64 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-cn52t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcdwg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcq64 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.99s)

TestMultiControlPlane/serial/PingHostFromPods (1.3s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-cn52t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-cn52t -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcdwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcdwg -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcq64 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-744000 -- exec busybox-7dff88458-qcq64 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.30s)

TestMultiControlPlane/serial/AddWorkerNode (50.31s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-744000 -v=7 --alsologtostderr
E0917 10:21:19.973208    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:19.979905    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:19.992675    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:20.016021    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:20.057319    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:20.139182    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:20.300715    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:20.623054    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:21.266158    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:22.549117    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:25.110828    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:21:30.232975    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-744000 -v=7 --alsologtostderr: (49.862158541s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.31s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-744000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

TestMultiControlPlane/serial/CopyFile (9.16s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp testdata/cp-test.txt ha-744000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000:/home/docker/cp-test.txt ha-744000-m02:/home/docker/cp-test_ha-744000_ha-744000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test_ha-744000_ha-744000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000:/home/docker/cp-test.txt ha-744000-m03:/home/docker/cp-test_ha-744000_ha-744000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test_ha-744000_ha-744000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000:/home/docker/cp-test.txt ha-744000-m04:/home/docker/cp-test_ha-744000_ha-744000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test_ha-744000_ha-744000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp testdata/cp-test.txt ha-744000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m02:/home/docker/cp-test.txt ha-744000:/home/docker/cp-test_ha-744000-m02_ha-744000.txt
E0917 10:21:40.475310    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test_ha-744000-m02_ha-744000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m02:/home/docker/cp-test.txt ha-744000-m03:/home/docker/cp-test_ha-744000-m02_ha-744000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test_ha-744000-m02_ha-744000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m02:/home/docker/cp-test.txt ha-744000-m04:/home/docker/cp-test_ha-744000-m02_ha-744000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test_ha-744000-m02_ha-744000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp testdata/cp-test.txt ha-744000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt ha-744000:/home/docker/cp-test_ha-744000-m03_ha-744000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test_ha-744000-m03_ha-744000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt ha-744000-m02:/home/docker/cp-test_ha-744000-m03_ha-744000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test_ha-744000-m03_ha-744000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m03:/home/docker/cp-test.txt ha-744000-m04:/home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test_ha-744000-m03_ha-744000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp testdata/cp-test.txt ha-744000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3062395547/001/cp-test_ha-744000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt ha-744000:/home/docker/cp-test_ha-744000-m04_ha-744000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000 "sudo cat /home/docker/cp-test_ha-744000-m04_ha-744000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt ha-744000-m02:/home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m02 "sudo cat /home/docker/cp-test_ha-744000-m04_ha-744000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 cp ha-744000-m04:/home/docker/cp-test.txt ha-744000-m03:/home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 ssh -n ha-744000-m03 "sudo cat /home/docker/cp-test_ha-744000-m04_ha-744000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.16s)
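Note: every CopyFile step above follows one round-trip pattern: push a file with "minikube cp", then "minikube ssh" into the target node and "sudo cat" the destination to confirm the bytes arrived. A minimal Go sketch of that pattern, reusing the profile and node names from this run (the helper below is illustrative, not minikube's actual test helper):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// copyAndVerify mirrors the log's pattern: "minikube cp" followed by
// "minikube ssh -n <node> sudo cat <dst>" to check the copied contents.
func copyAndVerify(profile, src, node, dst string) error {
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("content mismatch at %s:%s", node, dst)
	}
	return nil
}

func main() {
	if err := copyAndVerify("ha-744000", "testdata/cp-test.txt", "ha-744000-m02", "/home/docker/cp-test.txt"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("copy verified")
}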

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 node stop m02 -v=7 --alsologtostderr: (8.326634026s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr: exit status 7 (352.384208ms)

-- stdout --
	ha-744000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0917 10:21:54.656533    4254 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:21:54.656809    4254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:21:54.656815    4254 out.go:358] Setting ErrFile to fd 2...
	I0917 10:21:54.656818    4254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:21:54.657492    4254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:21:54.657973    4254 out.go:352] Setting JSON to false
	I0917 10:21:54.657999    4254 mustload.go:65] Loading cluster: ha-744000
	I0917 10:21:54.658043    4254 notify.go:220] Checking for updates...
	I0917 10:21:54.658345    4254 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:21:54.658358    4254 status.go:255] checking status of ha-744000 ...
	I0917 10:21:54.658758    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.658791    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.667729    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51761
	I0917 10:21:54.668071    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.668456    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.668467    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.668677    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.668781    4254 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:21:54.668860    4254 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:21:54.668942    4254 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 3812
	I0917 10:21:54.670082    4254 status.go:330] ha-744000 host status = "Running" (err=<nil>)
	I0917 10:21:54.670098    4254 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:21:54.670354    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.670378    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.678740    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51763
	I0917 10:21:54.679110    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.679480    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.679498    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.679699    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.679801    4254 main.go:141] libmachine: (ha-744000) Calling .GetIP
	I0917 10:21:54.679887    4254 host.go:66] Checking if "ha-744000" exists ...
	I0917 10:21:54.680143    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.680172    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.690772    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51765
	I0917 10:21:54.691123    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.691445    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.691459    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.691654    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.691762    4254 main.go:141] libmachine: (ha-744000) Calling .DriverName
	I0917 10:21:54.691922    4254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:21:54.691945    4254 main.go:141] libmachine: (ha-744000) Calling .GetSSHHostname
	I0917 10:21:54.692028    4254 main.go:141] libmachine: (ha-744000) Calling .GetSSHPort
	I0917 10:21:54.692131    4254 main.go:141] libmachine: (ha-744000) Calling .GetSSHKeyPath
	I0917 10:21:54.692246    4254 main.go:141] libmachine: (ha-744000) Calling .GetSSHUsername
	I0917 10:21:54.692336    4254 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000/id_rsa Username:docker}
	I0917 10:21:54.724239    4254 ssh_runner.go:195] Run: systemctl --version
	I0917 10:21:54.728536    4254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:21:54.740513    4254 kubeconfig.go:125] found "ha-744000" server: "https://192.169.0.254:8443"
	I0917 10:21:54.740536    4254 api_server.go:166] Checking apiserver status ...
	I0917 10:21:54.740580    4254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:21:54.752472    4254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1904/cgroup
	W0917 10:21:54.761245    4254 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1904/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:21:54.761309    4254 ssh_runner.go:195] Run: ls
	I0917 10:21:54.764491    4254 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0917 10:21:54.767652    4254 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0917 10:21:54.767664    4254 status.go:422] ha-744000 apiserver status = Running (err=<nil>)
	I0917 10:21:54.767672    4254 status.go:257] ha-744000 status: &{Name:ha-744000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:21:54.767683    4254 status.go:255] checking status of ha-744000-m02 ...
	I0917 10:21:54.767946    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.767969    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.776757    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0917 10:21:54.777118    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.777441    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.777451    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.777674    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.777789    4254 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:21:54.777866    4254 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:21:54.777941    4254 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 3822
	I0917 10:21:54.779031    4254 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 3822 missing from process table
	I0917 10:21:54.779048    4254 status.go:330] ha-744000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:21:54.779056    4254 status.go:343] host is not running, skipping remaining checks
	I0917 10:21:54.779062    4254 status.go:257] ha-744000-m02 status: &{Name:ha-744000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:21:54.779074    4254 status.go:255] checking status of ha-744000-m03 ...
	I0917 10:21:54.779381    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.779404    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.788050    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51771
	I0917 10:21:54.788406    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.788723    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.788738    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.788962    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.789071    4254 main.go:141] libmachine: (ha-744000-m03) Calling .GetState
	I0917 10:21:54.789153    4254 main.go:141] libmachine: (ha-744000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:21:54.789263    4254 main.go:141] libmachine: (ha-744000-m03) DBG | hyperkit pid from json: 3837
	I0917 10:21:54.790351    4254 status.go:330] ha-744000-m03 host status = "Running" (err=<nil>)
	I0917 10:21:54.790360    4254 host.go:66] Checking if "ha-744000-m03" exists ...
	I0917 10:21:54.790621    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.790651    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.799205    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51773
	I0917 10:21:54.799556    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.799897    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.799917    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.800134    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.800240    4254 main.go:141] libmachine: (ha-744000-m03) Calling .GetIP
	I0917 10:21:54.800319    4254 host.go:66] Checking if "ha-744000-m03" exists ...
	I0917 10:21:54.800581    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.800603    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.809088    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51775
	I0917 10:21:54.809498    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.809868    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.809889    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.810119    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.810275    4254 main.go:141] libmachine: (ha-744000-m03) Calling .DriverName
	I0917 10:21:54.810470    4254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:21:54.810485    4254 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHHostname
	I0917 10:21:54.810614    4254 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHPort
	I0917 10:21:54.810710    4254 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHKeyPath
	I0917 10:21:54.810824    4254 main.go:141] libmachine: (ha-744000-m03) Calling .GetSSHUsername
	I0917 10:21:54.810931    4254 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m03/id_rsa Username:docker}
	I0917 10:21:54.840308    4254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:21:54.850968    4254 kubeconfig.go:125] found "ha-744000" server: "https://192.169.0.254:8443"
	I0917 10:21:54.850983    4254 api_server.go:166] Checking apiserver status ...
	I0917 10:21:54.851025    4254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:21:54.862763    4254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0917 10:21:54.870337    4254 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:21:54.870402    4254 ssh_runner.go:195] Run: ls
	I0917 10:21:54.873568    4254 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0917 10:21:54.876698    4254 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0917 10:21:54.876708    4254 status.go:422] ha-744000-m03 apiserver status = Running (err=<nil>)
	I0917 10:21:54.876715    4254 status.go:257] ha-744000-m03 status: &{Name:ha-744000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:21:54.876725    4254 status.go:255] checking status of ha-744000-m04 ...
	I0917 10:21:54.877000    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.877019    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.885636    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51779
	I0917 10:21:54.886018    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.886350    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.886367    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.886574    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.886683    4254 main.go:141] libmachine: (ha-744000-m04) Calling .GetState
	I0917 10:21:54.886761    4254 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:21:54.886844    4254 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 3930
	I0917 10:21:54.887950    4254 status.go:330] ha-744000-m04 host status = "Running" (err=<nil>)
	I0917 10:21:54.887959    4254 host.go:66] Checking if "ha-744000-m04" exists ...
	I0917 10:21:54.888209    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.888231    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.896608    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I0917 10:21:54.896956    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.897253    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.897263    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.897496    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.897605    4254 main.go:141] libmachine: (ha-744000-m04) Calling .GetIP
	I0917 10:21:54.897681    4254 host.go:66] Checking if "ha-744000-m04" exists ...
	I0917 10:21:54.897949    4254 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:21:54.897975    4254 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:21:54.906328    4254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51783
	I0917 10:21:54.906666    4254 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:21:54.906976    4254 main.go:141] libmachine: Using API Version  1
	I0917 10:21:54.906988    4254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:21:54.907207    4254 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:21:54.907312    4254 main.go:141] libmachine: (ha-744000-m04) Calling .DriverName
	I0917 10:21:54.907431    4254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:21:54.907442    4254 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHHostname
	I0917 10:21:54.907538    4254 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHPort
	I0917 10:21:54.907616    4254 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHKeyPath
	I0917 10:21:54.907701    4254 main.go:141] libmachine: (ha-744000-m04) Calling .GetSSHUsername
	I0917 10:21:54.907776    4254 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/ha-744000-m04/id_rsa Username:docker}
	I0917 10:21:54.940764    4254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:21:54.953214    4254 status.go:257] ha-744000-m04 status: &{Name:ha-744000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.68s)
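Note: the status check above passes because the exit code, not the text, carries the health signal: with m02 stopped, "minikube status" still prints the per-node table on stdout but exits non-zero (7 in this run). A hedged Go sketch of reading that code, assuming only the behavior visible in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-744000", "status")
	out, err := cmd.Output() // stdout still holds the per-node table on failure
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 was observed above with one control-plane host stopped.
		fmt.Printf("status exited %d: at least one node is not fully running\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}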

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)
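Note: this check, like the HAppy* and later Degraded* checks, shells out to "minikube profile list --output json" and reads the cluster state from the payload. The payload is never printed in this log, so the struct below is an assumed shape for illustration only; the "valid" array and the Name/Status field names are hypothetical, not confirmed by this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList is an assumed shape for "minikube profile list --output json";
// the field names are illustrative, since the log never shows the payload.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // a degraded HA cluster would surface here
	}
}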

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 node start m02 -v=7 --alsologtostderr
E0917 10:22:00.957808    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 node start m02 -v=7 --alsologtostderr: (41.094740512s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 stop -v=7 --alsologtostderr
E0917 10:26:47.683889    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-744000 stop -v=7 --alsologtostderr: (24.891257997s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-744000 status -v=7 --alsologtostderr: exit status 7 (90.481185ms)

-- stdout --
	ha-744000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 10:26:58.366706    4443 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:26:58.366971    4443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.366976    4443 out.go:358] Setting ErrFile to fd 2...
	I0917 10:26:58.366980    4443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:26:58.367152    4443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:26:58.367319    4443 out.go:352] Setting JSON to false
	I0917 10:26:58.367343    4443 mustload.go:65] Loading cluster: ha-744000
	I0917 10:26:58.367383    4443 notify.go:220] Checking for updates...
	I0917 10:26:58.367648    4443 config.go:182] Loaded profile config "ha-744000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:26:58.367669    4443 status.go:255] checking status of ha-744000 ...
	I0917 10:26:58.368102    4443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.368147    4443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.377143    4443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52135
	I0917 10:26:58.377481    4443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.377980    4443 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.377991    4443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.378264    4443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.378404    4443 main.go:141] libmachine: (ha-744000) Calling .GetState
	I0917 10:26:58.378493    4443 main.go:141] libmachine: (ha-744000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.378550    4443 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid from json: 4331
	I0917 10:26:58.379547    4443 main.go:141] libmachine: (ha-744000) DBG | hyperkit pid 4331 missing from process table
	I0917 10:26:58.379592    4443 status.go:330] ha-744000 host status = "Stopped" (err=<nil>)
	I0917 10:26:58.379601    4443 status.go:343] host is not running, skipping remaining checks
	I0917 10:26:58.379607    4443 status.go:257] ha-744000 status: &{Name:ha-744000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:26:58.379627    4443 status.go:255] checking status of ha-744000-m02 ...
	I0917 10:26:58.379897    4443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.379919    4443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.388333    4443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52137
	I0917 10:26:58.388687    4443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.389078    4443 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.389100    4443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.389344    4443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.389459    4443 main.go:141] libmachine: (ha-744000-m02) Calling .GetState
	I0917 10:26:58.389548    4443 main.go:141] libmachine: (ha-744000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.389621    4443 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid from json: 4339
	I0917 10:26:58.390609    4443 main.go:141] libmachine: (ha-744000-m02) DBG | hyperkit pid 4339 missing from process table
	I0917 10:26:58.390640    4443 status.go:330] ha-744000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:26:58.390648    4443 status.go:343] host is not running, skipping remaining checks
	I0917 10:26:58.390654    4443 status.go:257] ha-744000-m02 status: &{Name:ha-744000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:26:58.390674    4443 status.go:255] checking status of ha-744000-m04 ...
	I0917 10:26:58.390918    4443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:26:58.390940    4443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:26:58.399454    4443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52139
	I0917 10:26:58.399827    4443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:26:58.400184    4443 main.go:141] libmachine: Using API Version  1
	I0917 10:26:58.400202    4443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:26:58.400409    4443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:26:58.400539    4443 main.go:141] libmachine: (ha-744000-m04) Calling .GetState
	I0917 10:26:58.400626    4443 main.go:141] libmachine: (ha-744000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:26:58.400708    4443 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid from json: 4356
	I0917 10:26:58.401719    4443 main.go:141] libmachine: (ha-744000-m04) DBG | hyperkit pid 4356 missing from process table
	I0917 10:26:58.401735    4443 status.go:330] ha-744000-m04 host status = "Stopped" (err=<nil>)
	I0917 10:26:58.401741    4443 status.go:343] host is not running, skipping remaining checks
	I0917 10:26:58.401749    4443 status.go:257] ha-744000-m04 status: &{Name:ha-744000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.98s)

                                                
                                    
TestImageBuild/serial/Setup (37.76s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-159000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-159000 --driver=hyperkit : (37.758291048s)
--- PASS: TestImageBuild/serial/Setup (37.76s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-159000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-159000: (1.806578906s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.86s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-159000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.86s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-159000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-159000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

                                                
                                    
TestJSONOutput/start/Command (85.09s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-362000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0917 10:30:21.625352    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 10:31:19.978104    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-362000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m25.093502859s)
--- PASS: TestJSONOutput/start/Command (85.09s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-362000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-362000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-362000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-362000 --output=json --user=testUser: (8.331115802s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.59s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-597000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-597000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (373.718082ms)

-- stdout --
	{"specversion":"1.0","id":"47f395fc-1be6-445e-8685-3902bb43d1a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-597000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f9ef7d4-1f24-480c-83b1-98bc8992d6a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"f652386d-4310-4645-88c1-6f9c04fdd2d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig"}}
	{"specversion":"1.0","id":"8050f4f2-8be8-4a91-afb3-b1853e8a27b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"b6c603a9-9f96-4a6f-b5db-846f6fec97b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cc2db0b8-9a26-4849-972b-4fa77ddd7e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube"}}
	{"specversion":"1.0","id":"bd5a290a-bb3a-43ab-8a07-5bff25178799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5cb4dcb0-3d79-4c2e-b24b-d1e308dedc8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-597000
--- PASS: TestErrorJSONOutput (0.59s)
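Note: the stdout above shows what --output=json actually emits: one CloudEvents-style JSON object per line (specversion, id, source, type, data), ending here with an io.k8s.sigs.minikube.error event carrying exitcode 56. A sketch of a consumer for such a stream, using only the field names visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event keeps just the fields visible in the log's JSON lines.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe minikube's JSON output in, e.g.:
	//   minikube start -p demo --output=json | ./consumer
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			// e.g. DRV_UNSUPPORTED_OS with exitcode 56, as in the run above
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}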

                                                
                                    
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (89.8s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-035000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-035000 --driver=hyperkit : (40.461714951s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-048000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-048000 --driver=hyperkit : (37.985327859s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-035000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-048000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-048000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-048000: (5.243462144s)
helpers_test.go:175: Cleaning up "first-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-035000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-035000: (5.277934569s)
--- PASS: TestMinikubeProfile (89.80s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-593000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0917 10:36:19.997955    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-593000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m51.623845859s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.86s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-593000 -- rollout status deployment/busybox: (4.198170274s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-kqr5w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-lhr7k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-kqr5w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-lhr7k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-kqr5w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-lhr7k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)
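Note: the deploy check above schedules one busybox pod per node, then resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from each pod to prove cluster DNS works on both nodes. A compact Go sketch of the same loop, assuming plain kubectl with a context named after the profile (as minikube configures by default) rather than the minikube kubectl wrapper; the pod names are the ones from this run and would differ elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-kqr5w", "busybox-7dff88458-lhr7k"} // names from this run
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, n := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-593000",
				"exec", pod, "--", "nslookup", n).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s FAILED: %v\n%s", pod, n, err, out)
				continue
			}
			fmt.Printf("%s: lookup %s ok\n", pod, n)
		}
	}
}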

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-kqr5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-kqr5w -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-lhr7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-593000 -- exec busybox-7dff88458-lhr7k -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
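Note: the shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, recovers the host's IP by taking line 5 of busybox's nslookup output and splitting it on single spaces to keep field 3; the test then pings that address (192.169.0.1 here). The same extraction in Go, with the output shape assumed from the pipeline (the sample text is illustrative):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics awk 'NR==5' | cut -d' ' -f3: take line 5 of the nslookup
// output and return its third space-separated field.
func hostIP(nslookupOut string) (string, error) {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: only %d lines", len(lines))
	}
	// cut -d' ' keeps empty fields, so split on a single space rather than
	// using strings.Fields.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected line 5: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Illustrative busybox-style nslookup output; the real shape is assumed
	// from the awk/cut pipeline in the test above.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.169.0.1 host.minikube.internal\n"
	ip, err := hostIP(sample)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host ip:", ip) // 192.169.0.1, matching the ping target above
}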

                                                
                                    
TestMultiNode/serial/AddNode (45.78s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-593000 -v 3 --alsologtostderr
E0917 10:37:43.068573    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-593000 -v 3 --alsologtostderr: (45.464796182s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.78s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-593000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp testdata/cp-test.txt multinode-593000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1531750391/001/cp-test_multinode-593000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000:/home/docker/cp-test.txt multinode-593000-m02:/home/docker/cp-test_multinode-593000_multinode-593000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test_multinode-593000_multinode-593000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000:/home/docker/cp-test.txt multinode-593000-m03:/home/docker/cp-test_multinode-593000_multinode-593000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m03 "sudo cat /home/docker/cp-test_multinode-593000_multinode-593000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp testdata/cp-test.txt multinode-593000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1531750391/001/cp-test_multinode-593000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000-m02:/home/docker/cp-test.txt multinode-593000:/home/docker/cp-test_multinode-593000-m02_multinode-593000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000 "sudo cat /home/docker/cp-test_multinode-593000-m02_multinode-593000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000-m02:/home/docker/cp-test.txt multinode-593000-m03:/home/docker/cp-test_multinode-593000-m02_multinode-593000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m03 "sudo cat /home/docker/cp-test_multinode-593000-m02_multinode-593000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp testdata/cp-test.txt multinode-593000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1531750391/001/cp-test_multinode-593000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000-m03:/home/docker/cp-test.txt multinode-593000:/home/docker/cp-test_multinode-593000-m03_multinode-593000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000 "sudo cat /home/docker/cp-test_multinode-593000-m03_multinode-593000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 cp multinode-593000-m03:/home/docker/cp-test.txt multinode-593000-m02:/home/docker/cp-test_multinode-593000-m03_multinode-593000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test_multinode-593000-m03_multinode-593000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.25s)
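Every copy assertion above is an instance of the same round trip; a minimal sketch of one iteration, assuming the profile and node names from this run:

# Push a local file into a node, then read it back over ssh to compare contents.
$ out/minikube-darwin-amd64 -p multinode-593000 cp testdata/cp-test.txt multinode-593000-m02:/home/docker/cp-test.txt
$ out/minikube-darwin-amd64 -p multinode-593000 ssh -n multinode-593000-m02 "sudo cat /home/docker/cp-test.txt"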

                                                
                                    
TestMultiNode/serial/StopNode (2.84s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-593000 node stop m03: (2.338841736s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-593000 status: exit status 7 (250.087946ms)
-- stdout --
	multinode-593000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-593000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-593000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr: exit status 7 (253.263647ms)
-- stdout --
	multinode-593000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-593000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-593000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0917 10:38:22.470043    5443 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:38:22.470217    5443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:38:22.470223    5443 out.go:358] Setting ErrFile to fd 2...
	I0917 10:38:22.470226    5443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:38:22.470398    5443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:38:22.470571    5443 out.go:352] Setting JSON to false
	I0917 10:38:22.470594    5443 mustload.go:65] Loading cluster: multinode-593000
	I0917 10:38:22.470637    5443 notify.go:220] Checking for updates...
	I0917 10:38:22.470959    5443 config.go:182] Loaded profile config "multinode-593000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:38:22.470974    5443 status.go:255] checking status of multinode-593000 ...
	I0917 10:38:22.471432    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.471482    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.480601    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53066
	I0917 10:38:22.480998    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.481370    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.481378    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.481598    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.481718    5443 main.go:141] libmachine: (multinode-593000) Calling .GetState
	I0917 10:38:22.481814    5443 main.go:141] libmachine: (multinode-593000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:38:22.481876    5443 main.go:141] libmachine: (multinode-593000) DBG | hyperkit pid from json: 5142
	I0917 10:38:22.483172    5443 status.go:330] multinode-593000 host status = "Running" (err=<nil>)
	I0917 10:38:22.483190    5443 host.go:66] Checking if "multinode-593000" exists ...
	I0917 10:38:22.483433    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.483466    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.492008    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53068
	I0917 10:38:22.492360    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.492685    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.492698    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.492927    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.493032    5443 main.go:141] libmachine: (multinode-593000) Calling .GetIP
	I0917 10:38:22.493112    5443 host.go:66] Checking if "multinode-593000" exists ...
	I0917 10:38:22.493351    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.493372    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.501841    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53070
	I0917 10:38:22.502204    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.502524    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.502535    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.502797    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.502927    5443 main.go:141] libmachine: (multinode-593000) Calling .DriverName
	I0917 10:38:22.503096    5443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:38:22.503119    5443 main.go:141] libmachine: (multinode-593000) Calling .GetSSHHostname
	I0917 10:38:22.503215    5443 main.go:141] libmachine: (multinode-593000) Calling .GetSSHPort
	I0917 10:38:22.503300    5443 main.go:141] libmachine: (multinode-593000) Calling .GetSSHKeyPath
	I0917 10:38:22.503386    5443 main.go:141] libmachine: (multinode-593000) Calling .GetSSHUsername
	I0917 10:38:22.503469    5443 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/multinode-593000/id_rsa Username:docker}
	I0917 10:38:22.535262    5443 ssh_runner.go:195] Run: systemctl --version
	I0917 10:38:22.539818    5443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:38:22.550873    5443 kubeconfig.go:125] found "multinode-593000" server: "https://192.169.0.13:8443"
	I0917 10:38:22.550897    5443 api_server.go:166] Checking apiserver status ...
	I0917 10:38:22.550938    5443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 10:38:22.562582    5443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup
	W0917 10:38:22.570266    5443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 10:38:22.570318    5443 ssh_runner.go:195] Run: ls
	I0917 10:38:22.573951    5443 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0917 10:38:22.577657    5443 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0917 10:38:22.577669    5443 status.go:422] multinode-593000 apiserver status = Running (err=<nil>)
	I0917 10:38:22.577679    5443 status.go:257] multinode-593000 status: &{Name:multinode-593000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:38:22.577689    5443 status.go:255] checking status of multinode-593000-m02 ...
	I0917 10:38:22.577955    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.577975    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.586735    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53074
	I0917 10:38:22.587098    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.587411    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.587422    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.587650    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.587768    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .GetState
	I0917 10:38:22.587854    5443 main.go:141] libmachine: (multinode-593000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:38:22.587923    5443 main.go:141] libmachine: (multinode-593000-m02) DBG | hyperkit pid from json: 5161
	I0917 10:38:22.589244    5443 status.go:330] multinode-593000-m02 host status = "Running" (err=<nil>)
	I0917 10:38:22.589255    5443 host.go:66] Checking if "multinode-593000-m02" exists ...
	I0917 10:38:22.589521    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.589544    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.598220    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53076
	I0917 10:38:22.598577    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.598924    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.598943    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.599172    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.599294    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .GetIP
	I0917 10:38:22.599389    5443 host.go:66] Checking if "multinode-593000-m02" exists ...
	I0917 10:38:22.599661    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.599684    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.608092    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53078
	I0917 10:38:22.608431    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.608799    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.608819    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.609046    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.609166    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .DriverName
	I0917 10:38:22.609301    5443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 10:38:22.609315    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .GetSSHHostname
	I0917 10:38:22.609400    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .GetSSHPort
	I0917 10:38:22.609490    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .GetSSHKeyPath
	I0917 10:38:22.609573    5443 main.go:141] libmachine: (multinode-593000-m02) Calling .GetSSHUsername
	I0917 10:38:22.609644    5443 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19662-1558/.minikube/machines/multinode-593000-m02/id_rsa Username:docker}
	I0917 10:38:22.644480    5443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 10:38:22.654677    5443 status.go:257] multinode-593000-m02 status: &{Name:multinode-593000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:38:22.654698    5443 status.go:255] checking status of multinode-593000-m03 ...
	I0917 10:38:22.655032    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:38:22.655059    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:38:22.663702    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53081
	I0917 10:38:22.664038    5443 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:38:22.664376    5443 main.go:141] libmachine: Using API Version  1
	I0917 10:38:22.664386    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:38:22.664575    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:38:22.664697    5443 main.go:141] libmachine: (multinode-593000-m03) Calling .GetState
	I0917 10:38:22.664783    5443 main.go:141] libmachine: (multinode-593000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:38:22.664862    5443 main.go:141] libmachine: (multinode-593000-m03) DBG | hyperkit pid from json: 5230
	I0917 10:38:22.666114    5443 main.go:141] libmachine: (multinode-593000-m03) DBG | hyperkit pid 5230 missing from process table
	I0917 10:38:22.666179    5443 status.go:330] multinode-593000-m03 host status = "Stopped" (err=<nil>)
	I0917 10:38:22.666190    5443 status.go:343] host is not running, skipping remaining checks
	I0917 10:38:22.666195    5443 status.go:257] multinode-593000-m03 status: &{Name:multinode-593000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)
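Note the exit-code convention the assertions rely on: in this run `status` exits 0 while every node is up and 7 once a host is stopped. A minimal scripting sketch against that behavior, assuming the same profile:

# Stop one worker, then branch on the aggregate status exit code.
$ out/minikube-darwin-amd64 -p multinode-593000 node stop m03
$ if out/minikube-darwin-amd64 -p multinode-593000 status; then echo "all nodes running"; else echo "a node is down (exit $?)"; fi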

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.56s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 node start m03 -v=7 --alsologtostderr
E0917 10:38:58.535293    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-593000 node start m03 -v=7 --alsologtostderr: (41.195278305s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.56s)

TestMultiNode/serial/RestartKeepsNodes (140.18s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-593000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-593000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-593000: (18.91368459s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-593000 --wait=true -v=8 --alsologtostderr
E0917 10:41:19.999250    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-593000 --wait=true -v=8 --alsologtostderr: (2m1.146659625s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-593000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (140.18s)

TestMultiNode/serial/DeleteNode (3.3s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-593000 node delete m03: (2.959316915s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.30s)

TestMultiNode/serial/StopMultiNode (16.81s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-593000 stop: (16.640086888s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-593000 status: exit status 7 (80.992742ms)
-- stdout --
	multinode-593000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-593000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr: exit status 7 (84.994862ms)
-- stdout --
	multinode-593000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-593000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0917 10:41:44.482229    5586 out.go:345] Setting OutFile to fd 1 ...
	I0917 10:41:44.482401    5586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:44.482406    5586 out.go:358] Setting ErrFile to fd 2...
	I0917 10:41:44.482410    5586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 10:41:44.482591    5586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19662-1558/.minikube/bin
	I0917 10:41:44.482770    5586 out.go:352] Setting JSON to false
	I0917 10:41:44.482795    5586 mustload.go:65] Loading cluster: multinode-593000
	I0917 10:41:44.482834    5586 notify.go:220] Checking for updates...
	I0917 10:41:44.483098    5586 config.go:182] Loaded profile config "multinode-593000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 10:41:44.483112    5586 status.go:255] checking status of multinode-593000 ...
	I0917 10:41:44.483579    5586 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:41:44.483629    5586 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:41:44.492474    5586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53313
	I0917 10:41:44.492773    5586 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:41:44.493157    5586 main.go:141] libmachine: Using API Version  1
	I0917 10:41:44.493165    5586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:41:44.493364    5586 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:41:44.493482    5586 main.go:141] libmachine: (multinode-593000) Calling .GetState
	I0917 10:41:44.493558    5586 main.go:141] libmachine: (multinode-593000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:41:44.493625    5586 main.go:141] libmachine: (multinode-593000) DBG | hyperkit pid from json: 5500
	I0917 10:41:44.494656    5586 main.go:141] libmachine: (multinode-593000) DBG | hyperkit pid 5500 missing from process table
	I0917 10:41:44.494686    5586 status.go:330] multinode-593000 host status = "Stopped" (err=<nil>)
	I0917 10:41:44.494696    5586 status.go:343] host is not running, skipping remaining checks
	I0917 10:41:44.494701    5586 status.go:257] multinode-593000 status: &{Name:multinode-593000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 10:41:44.494723    5586 status.go:255] checking status of multinode-593000-m02 ...
	I0917 10:41:44.495001    5586 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 10:41:44.495021    5586 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 10:41:44.503495    5586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53315
	I0917 10:41:44.503997    5586 main.go:141] libmachine: () Calling .GetVersion
	I0917 10:41:44.504377    5586 main.go:141] libmachine: Using API Version  1
	I0917 10:41:44.504394    5586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 10:41:44.504584    5586 main.go:141] libmachine: () Calling .GetMachineName
	I0917 10:41:44.510657    5586 main.go:141] libmachine: (multinode-593000-m02) Calling .GetState
	I0917 10:41:44.510774    5586 main.go:141] libmachine: (multinode-593000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 10:41:44.510841    5586 main.go:141] libmachine: (multinode-593000-m02) DBG | hyperkit pid from json: 5519
	I0917 10:41:44.511858    5586 main.go:141] libmachine: (multinode-593000-m02) DBG | hyperkit pid 5519 missing from process table
	I0917 10:41:44.511891    5586 status.go:330] multinode-593000-m02 host status = "Stopped" (err=<nil>)
	I0917 10:41:44.511897    5586 status.go:343] host is not running, skipping remaining checks
	I0917 10:41:44.511903    5586 status.go:257] multinode-593000-m02 status: &{Name:multinode-593000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.81s)

TestMultiNode/serial/RestartMultiNode (97.94s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-593000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-593000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m37.6031282s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-593000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (97.94s)

TestMultiNode/serial/ValidateNameConflict (44.41s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-593000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-593000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-593000-m02 --driver=hyperkit : exit status 14 (513.813991ms)
-- stdout --
	* [multinode-593000-m02] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	! Profile name 'multinode-593000-m02' is duplicated with machine name 'multinode-593000-m02' in profile 'multinode-593000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-593000-m03 --driver=hyperkit 
E0917 10:43:58.536838    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-593000-m03 --driver=hyperkit : (38.324407766s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-593000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-593000: exit status 80 (261.88158ms)
-- stdout --
	* Adding node m03 to cluster multinode-593000 as [worker]
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-593000-m03 already exists in multinode-593000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-593000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-593000-m03: (5.255842043s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.41s)
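The two failures above suggest a cheap guard when scripting profile creation: check the existing profile list before picking a name. A sketch only; the grep assumes each profile object in the JSON carries a "Name" field, which this log does not show verbatim:

# Refuse to reuse a name that an existing profile already owns (hypothetical guard).
$ NAME=multinode-593000-m02
$ if out/minikube-darwin-amd64 profile list --output json | grep -q "\"Name\":\"$NAME\""; then echo "profile name $NAME is taken"; fi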

                                                
                                    
TestSkaffold (114.21s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3542685258 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3542685258 version: (1.740302288s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-611000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-611000 --memory=2600 --driver=hyperkit : (38.166115603s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3542685258 run --minikube-profile skaffold-611000 --kube-context skaffold-611000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3542685258 run --minikube-profile skaffold-611000 --kube-context skaffold-611000 --status-check=true --port-forward=false --interactive=false: (56.440141446s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-8455567d64-nscdk" [79da1f85-4b0c-4294-9b8a-3225b5528078] Running
E0917 10:51:20.002844    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004483836s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-694d847b98-mqx8s" [51d0d30f-4b7f-4d2b-805c-44a55a8e22e2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003272514s
helpers_test.go:175: Cleaning up "skaffold-611000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-611000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-611000: (5.249277606s)
--- PASS: TestSkaffold (114.21s)

TestRunningBinaryUpgrade (111.8s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3160091076 start -p running-upgrade-004000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3160091076 start -p running-upgrade-004000 --memory=2200 --vm-driver=hyperkit : (1m21.632946213s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-004000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-004000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (22.854423534s)
helpers_test.go:175: Cleaning up "running-upgrade-004000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-004000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-004000: (5.486881154s)
--- PASS: TestRunningBinaryUpgrade (111.80s)

TestKubernetesUpgrade (1496.54s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0917 11:06:16.993554    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:06:19.987971    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:08:58.524725    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:11:03.061805    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:11:16.994170    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:11:19.985574    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:12:40.079747    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:13:58.523632    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:16:16.992585    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:16:19.986255    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:18:58.522097    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:20:21.702196    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:21:17.052692    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:21:20.046792    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:23:58.584134    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:26:17.056853    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:26:20.050109    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:27:43.127908    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:28:58.588840    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/addons-684000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (23m44.308571267s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-585000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-585000: (8.398287022s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-585000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-585000 status --format={{.Host}}: exit status 7 (69.21034ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (33.815328063s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-585000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (472.322636ms)
-- stdout --
	* [kubernetes-upgrade-585000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-585000
	    minikube start -p kubernetes-upgrade-585000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5850002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-585000 --kubernetes-version=v1.31.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (24.166194795s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-585000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-585000: (5.252983852s)
--- PASS: TestKubernetesUpgrade (1496.54s)
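Condensed, the flow this test exercises is: bring up an old Kubernetes, stop, restart on a newer version, then confirm that a downgrade is refused. A sketch using the versions and profile from this run:

# 1. Old cluster up, then stopped.
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit
$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-585000
# 2. Upgrade in place by restarting with a newer version.
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.31.1 --driver=hyperkit
# 3. The downgrade attempt is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-585000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit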

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.23s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19662
- KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1201180014/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1201180014/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1201180014/001/.minikube/bin/docker-machine-driver-hyperkit
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1201180014/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.23s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.09s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19662
- KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4225055981/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4225055981/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4225055981/001/.minikube/bin/docker-machine-driver-hyperkit
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4225055981/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.09s)
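Both subtests hit the same constraint: the hyperkit driver binary must be root-owned and setuid, and the non-interactive run cannot sudo. Applied by hand, the fix is the pair of commands the log prints, sketched here with $MINIKUBE_HOME standing in for the per-test home directory:

# Make the driver root-owned and setuid so it can create VMs without prompting.
$ sudo chown root:wheel "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"
$ sudo chmod u+s "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"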

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.83s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.83s)

TestStoppedBinaryUpgrade/Upgrade (105.69s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1636554674 start -p stopped-upgrade-003000 --memory=2200 --vm-driver=hyperkit 
E0917 11:29:20.147923    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1636554674 start -p stopped-upgrade-003000 --memory=2200 --vm-driver=hyperkit : (48.373852393s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1636554674 -p stopped-upgrade-003000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1636554674 -p stopped-upgrade-003000 stop: (8.237185576s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-003000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-003000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (49.08228539s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.69s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.96s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-003000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-003000: (2.962026095s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (494.216745ms)
-- stdout --
	* [NoKubernetes-747000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19662
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19662-1558/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19662-1558/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)
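As the MK_USAGE text says, --no-kubernetes and --kubernetes-version are mutually exclusive; either of the following resolves it (both commands appear elsewhere in this run):

# Clear a globally configured version, if one is set...
$ minikube config unset kubernetes-version
# ...or start without any version pin.
$ out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --driver=hyperkit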

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.62s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-747000 --driver=hyperkit 
E0917 11:31:17.060221    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/skaffold-611000/client.crt: no such file or directory" logger="UnhandledError"
E0917 11:31:20.051522    2121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19662-1558/.minikube/profiles/functional-575000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-747000 --driver=hyperkit : (1m37.453258796s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-747000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.62s)

TestNoKubernetes/serial/StartWithStopK8s (57.38s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-747000 --no-kubernetes --driver=hyperkit : (54.851698876s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-747000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-747000 status -o json: exit status 2 (146.586749ms)
-- stdout --
	{"Name":"NoKubernetes-747000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-747000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-747000: (2.380322247s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (57.38s)

Test skip (18/214)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
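
Unlike the cache-gated skips above, TestKVMDriverInstallOrUpdate (and TestScheduledStopWindows further down) is platform-gated. A minimal sketch of that guard using the standard library's runtime.GOOS, again as an illustration rather than the suite's actual code:

	package driver

	import (
		"runtime"
		"testing"
	)

	func TestKVMDriverInstallOrUpdate(t *testing.T) {
		// KVM is Linux-only, so the suite skips everywhere else; on
		// this Darwin host the guard fires with "Skip if not linux."
		if runtime.GOOS != "linux" {
			t.Skip("Skip if not linux.")
		}
		// ... driver install/update checks would follow here.
	}
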
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)